openstack_placement-13.0.0/.coveragerc:

[run]
branch = True
source = placement
omit = placement/tests/*

openstack_placement-13.0.0/.pre-commit-config.yaml:

---
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: mixed-line-ending
        args: ['--fix', 'lf']
        exclude: '.*\.(svg)$'
      - id: check-byte-order-marker
      - id: check-executables-have-shebangs
      - id: check-merge-conflict
      - id: debug-statements
      - id: check-json
        files: .*\.json$
      - id: check-yaml
        files: .*\.(yaml|yml)$
  - repo: https://github.com/Lucas-C/pre-commit-hooks
    rev: v1.5.5
    hooks:
      - id: remove-tabs
        exclude: '.*\.(svg)$'
  - repo: https://opendev.org/openstack/hacking
    rev: 6.1.0
    hooks:
      - id: hacking
        additional_dependencies: []
        exclude: '^(doc|releasenotes|tools)/.*$'
  - repo: https://github.com/hhatto/autopep8
    rev: v2.3.1
    hooks:
      - id: autopep8
        files: '^.*\.py$'
  - repo: https://github.com/sphinx-contrib/sphinx-lint
    rev: v0.9.1
    hooks:
      - id: sphinx-lint
        args: [--enable=default-role]
        files: ^doc/|releasenotes|api-guide
        types: [rst]

openstack_placement-13.0.0/.stestr.conf:

[DEFAULT]
test_path=./placement/tests/unit
top_dir=./
# The group_regex describes how stestr will group tests into the same process
# when running concurrently. The following ensures that gabbi tests coming from
# the same YAML file are all in the same process. This is important because
# each YAML file represents an ordered sequence of HTTP requests. Note that
# tests which do not match this regex will not be grouped in any special way.
# See the following for more details.
# http://stestr.readthedocs.io/en/latest/MANUAL.html#grouping-tests
# https://gabbi.readthedocs.io/en/latest/#purpose
group_regex=placement\.tests\.functional\.test_api(?:\.|_)([^_]+)
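A minimal sketch of how that grouping key behaves, assuming a few invented gabbi test ids; this is an approximation of the idea, not stestr's actual scheduler code:

import re

# The group_regex from .stestr.conf above. A grouping key is derived from the
# part of the test id that this pattern matches (approximated here with the
# full match), so ids generated from the same gabbi YAML file collapse to one
# key and run in the same worker process.
GROUP_REGEX = re.compile(
    r"placement\.tests\.functional\.test_api(?:\.|_)([^_]+)")

# Invented test ids, for illustration only.
test_ids = [
    "placement.tests.functional.test_api.allocations_post_new_allocation",
    "placement.tests.functional.test_api.allocations_delete_allocation",
    "placement.tests.functional.test_api.inventory_update_total",
]

groups = {}
for test_id in test_ids:
    match = GROUP_REGEX.match(test_id)
    # Ids that do not match are not grouped in any special way.
    key = match.group(0) if match else test_id
    groups.setdefault(key, []).append(test_id)

# The two allocations_* ids share a key; the inventory id gets its own.
for key, members in sorted(groups.items()):
    print(key, "->", members)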
openstack_placement-13.0.0/.zuul.yaml:

# Initial set of jobs that will be extended over time as
# we get things working.

# TODO(gmann): As per the 2025.1 testing runtime, we need to run at least
# one job on Jammy. This job can be removed in the next cycle (2025.2).
- job:
    name: tempest-integrated-placement-ubuntu-jammy
    description: This is the integrated placement job running on Ubuntu Jammy (22.04)
    parent: tempest-integrated-placement
    nodeset: openstack-single-node-jammy

- project:
    templates:
      # The integrated-gate-placement template adds the
      # tempest-integrated-placement and grenade jobs.
      # tempest-integrated-placement runs a subset of tempest tests which are
      # relevant for placement, e.g. it does not run keystone tests.
      - check-requirements
      - integrated-gate-placement
      - openstack-cover-jobs
      - openstack-python3-jobs
      - periodic-stable-jobs
      - publish-openstack-docs-pti
      - release-notes-jobs-python3
    check:
      jobs:
        - openstack-tox-functional-py39
        - openstack-tox-functional-py312
        - openstack-tox-pep8
        - placement-nova-tox-functional-py312
        - placement-nested-perfload:
            voting: false
        - placement-perfload:
            voting: false
        - tempest-integrated-placement:
            # The 'gate-irrelevant-files' alias defines the set of
            # irrelevant-files for which the integrated testing jobs are not
            # required to run. If a change touches only those files, zuul can
            # skip the integrated testing jobs to save infra resources.
            # 'gate-irrelevant-files' should be used for integrated gate jobs
            # only, not for other jobs such as functional, unit, or doc jobs.
            irrelevant-files: &gate-irrelevant-files
              - ^api-.*$
              - ^.*\.rst$
              - ^.git.*$
              - ^doc/.*$
              - ^placement/tests/.*$
              - ^releasenotes/.*$
              - ^tools/.*$
              - ^tox.ini$
        - tempest-integrated-placement-ubuntu-jammy:
            irrelevant-files: *gate-irrelevant-files
        - grenade:
            irrelevant-files: *gate-irrelevant-files
        - grenade-skip-level:
            irrelevant-files: *gate-irrelevant-files
        - tempest-ipv6-only:
            irrelevant-files: *gate-irrelevant-files
    gate:
      jobs:
        - openstack-tox-functional-py39
        - openstack-tox-functional-py312
        - openstack-tox-pep8
        - placement-nova-tox-functional-py312
        - tempest-integrated-placement:
            irrelevant-files: *gate-irrelevant-files
        - tempest-integrated-placement-ubuntu-jammy:
            irrelevant-files: *gate-irrelevant-files
        - grenade:
            irrelevant-files: *gate-irrelevant-files
        - grenade-skip-level:
            irrelevant-files: *gate-irrelevant-files
        - tempest-ipv6-only:
            irrelevant-files: *gate-irrelevant-files
    periodic-weekly:
      jobs:
        # Update the python version when the supported runtime for testing
        # changes. We only test the latest version in the periodics, as it is
        # just a signal that we need to investigate the health of the master
        # branch in the absence of frequent patches.
        - openstack-tox-functional-py312
        - openstack-tox-py312
        - placement-nova-tox-functional-py312
        - tempest-integrated-placement

- job:
    name: placement-nova-tox-functional-py312
    parent: nova-tox-functional-py312
    description: |
      Run the nova functional tests to confirm that we aren't breaking the
      PlacementFixture.
    vars:
      # The 'functional-without-sample-db-tests' tox env is defined in nova's
      # tox.ini to skip the api|notification _sample_tests and db-related
      # tests.
      tox_envlist: functional-without-sample-db-tests

- job:
    name: placement-perfload
    parent: base
    description: |
      A simple node on which to run placement with the barest of configs and
      make performance related tests against it.
    required-projects:
      - opendev.org/openstack/placement
    irrelevant-files:
      - ^.*\.rst$
      - ^api-ref/.*$
      - ^doc/.*$
      - ^releasenotes/.*$
      - ^.git.*$
      - ^placement/tests/.*$
      - ^tox.ini$
    run: playbooks/perfload.yaml
    post-run: playbooks/post.yaml

- job:
    name: placement-nested-perfload
    parent: placement-perfload
    description: |
      A simple node on which to run placement with the barest of configs and
      make nested performance related tests against it.
timeout: 3600 run: playbooks/nested-perfload.yaml ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591511.0 openstack_placement-13.0.0/AUTHORS0000664000175000017500000006406000000000000016713 0ustar00zuulzuul00000000000000Aaron Lee Aaron Lee Aaron Rosen Aarti Kriplani Abhishek Chanda Abhishek Kekane Adam Gandelman Adam Gandelman Adam Gandelman Adam Johnson Adam Spiers Aditi Rajagopal Aditi Raveesh Adrian Smith Adrien Cunin Ahmad Hassan Akihiro Motoki Alessandro Pilotti Alessandro Pilotti Alessio Ababilov Alessio Ababilov Alex Gaynor Alex Glikson Alex Meade Alex Szarka Alex Xu AlexFrolov Alexander Bochkarev Alexandra Settle Alexei Kornienko Alexey I. Froloff Alexey Roytman Alexis Lee Alexis Lee Alfredo Moralejo Allen Gao Alvaro Lopez Garcia Ameed Ashour Amit Uniyal Amy Marrich (spotz) Andrea Rosa Andreas Jaeger Andreas Jaeger Andrew Bogott Andrew Clay Shafer Andrew Laski Andrew Laski Andrew Melton Andrey Kurilin Andrey Pavlov Andrey Volkov Andy McCrae Andy Smith Andy Southgate Angus Lees Anh Tran Anita Kuno Anne Gentle Anthony Young Anton Arefiev Anton V. Yanchenko Antony Messerli Anusha Unnam Arata Notsu Arathi Armando Migliaccio Armando Migliaccio Arnaud Morin Artom Lifshitz Arvind Nadendla Attila Fazekas Augustina Ragwitz Balazs Gibizer Balazs Gibizer Balazs Gibizer Belmiro Moreira Ben McGraw Ben Nemec Ben Nemec Boris Filippov Boris Pavlovic Brad Hall Brad Pokorny Brant Knudson Brian Elliott Brian Elliott Brian Lamar Brian Rosmaita Brian Schott Brian Schott Brian Waldon Brian Waldon Brianna Poulos Burt Holzman Béla Vancsics Cao Xuan Hoang Cedric LECOMTE Cerberus Chandan Kumar Chang Bo Guo ChangBo Guo(gcb) Changbin Liu Chen Chet Burgess Chiradeep Vittal Chris Behrens Chris Dent Chris Friesen Chris Krelle Chris Yeoh Christian Berendt Christian Rohmann Christopher Lefelhocz Christopher Yeoh Chuck Carmack Chuck Short Chuck Short Cian O'Driscoll Clark Boylan Claudiu Belu Clint Byrum Cole Robinson Colleen Murphy Corey Bryant Cory Wright Craig Vyvial Cyril Roelandt Dan Prince Dan Prince Dan Smith Dan Smith Dan Smith Dan Wendlandt Dane Fichter Daniel Abad Daniel P. Berrange Dao Cong Tien Davanum Srinivas Davanum Srinivas Dave Walker (Daviey) David Kang David Pravec David Ripton David Shrewsbury David Subiros Dean Troyer Deepak Garg Derek Higgins Devananda van der Veen Devdatta Kulkarni Devendra Modium Devin Carlen Devin Carlen Dheeraj Gupta Diana Clarke Dina Belova Dinesh Bhor Dirk Mueller Dmitry Spikhalskiy Dmitry Tantsur Dolph Mathews Don Dugger Donal Lafferty Doug Hellmann Doug Hellmann Doug Hellmann Drew Thorstensen Duan Jiong Duncan McGreggor Ed Leafe EdLeafe Einst Crazy Eldar Nugaev Eldar Nugaev Eli Qiao Eli Qiao Elod Illes Előd Illés Emilien Macchi Eoghan Glynn Eric Brown Eric Day Eric Fried Eric Fried Eric Guo Eric Windisch Eric Young Erik Olof Gunnar Andersson Esra Celik Eugene Kirpichov Eugene Nikanorov Eugeniya Kudryashova Ewan Mellor Ewan Mellor Fang Jinxing Feodor Tersin Flavia Missi Flavio Percoco Gabe Westmaas Gabor Antal Gary Kotton Gary Kotton Ghanshyam Ghanshyam Mann Ghanshyan Mann Ghe Rivero Grant Murphy Guillaume Boutry Gábor Antal Haiwei Xu Hans Lindgren Harshada Mangesh Kakad He Jie Xu He Jie Xu He Yongli Hemanth Makkapati Hengqing Hu Hervé Beraud Hesam Chobanlou Hieu LE Hirofumi Ichihara Hironori Shiina Hisaharu Ishii Hongbin Lu Huan Xie Huang Rui Ian Cordasco Ian Wienand Ildiko Vancsa Ildiko Vancsa Ilya Alekseyev Ilya Alekseyev Ilya Pekelny Ionuț Arțăriși Isaku Yamahata Ivan A. 
Melnikov Jackie Truong Jake Dahn Jake Liu James Carey James E. Blair James E. Blair James E. Blair James E. Blair Jamie Lennox Jason Cannavale Jason Dillaman Jason Koelker Jason Kölker Jason.Zhao Jay Lau Jay Pipes Jay S. Bryant Jeffrey Zhang Jens Harbott Jens Rosenboom Jeremy Stanley Jesse Andrews Jesse Andrews Jesse Andrews Jesse Pretorius Jiajun Liu Jiajun Liu Jian Wen Jim Fehlig Jim Rollenhagen Jimmy Bergman Jinwoo 'Joseph' Suh Joe Cropper Joe Gordon Joe Gordon Joe Heck Joel Coffman Joel Moore joelbm24@gmail.com <> Johannes Erdfelt Johannes Erdfelt Johannes Erdfelt Johannes Kulik John Bresnahan John Garbutt John Garbutt John Garbutt John Griffith John Griffith John Kennedy John L. Villalovos John Tran John Tran John Warren Josh Durgin Josh Durgin Josh Kearney Josh Kearney Josh Kleinpeter Joshua Harlow Joshua Harlow Joshua Hesketh Joshua McKenty Joshua McKenty Joshua McKenty Juan Manuel Olle Julien Danjou Julien Danjou Junya Noguchi Justin SB Justin Santa Barbara Justin Santa Barbara Justin Shepherd Kaitlin Farr Kashyap Chamarthy Kaushik Chandrashekar Kei Masumoto Kei masumoto Keisuke Tagami Ken Burger Ken Igarashi Ken Pepple Ken'ichi Ohmichi Ken'ichi Ohmichi Keshava Bharadwaj Kevin Bringard Kevin L. Mitchell Kevin_Zheng Kieran Spear Koji Iida Krisztian Gacsal Kun Huang Kurt Taylor Kylin CG Lajos Katona Lance Bragstad Launchpad Translations on behalf of nova-core <> Lee Yarwood Liam Kelleher Lianhao Lu LiuNanke Lorin Hochstein Lucas Alvares Gomes Ludovic Beliveau Luigi Toscano Luong Anh Tuan Lvov Maxim MORITA Kazutaka Maciej Szankin Mahesh Panchaksharaiah Mandar Vaze Marcos Lobo Marian Horban Mark Doffman Mark Goddard Mark McClain Mark McLoughlin Mark Washenberger Markus Zoeller Martin Schuppert Maru Newby Masanori Itoh Masayuki Igawa Mate Lakat Matt Dietz Matt Joyce Matt Odden Matt Riedemann Matt Riedemann Matthew Booth Matthew Edmonds Matthew Gilliard Matthew Hooker Matthew Sherborne Matthew Treinish Matthew Treinish Mauro S. M. 
Rodrigues Mehdi Abaakouk Melanie Witt Michael Davies Michael Gundlach Michael H Wilson Michael Kerrin Michael Krotscheck Michael Still Michael Wilson Michal Mike Bayer Mike Durnosvistov Mike Perez Mike Pittaro Mike Scherbakov Mike Spreitzer MikeG451 Mikhail Durnosvistov Mikyung Kang Mohammed Naser Monsyne Dragon Monty Taylor Morgan Fainberg Moshe Levi MotoKen Muneyuki Noguchi NTT PF Lab Nachi Ueno Nachi Ueno Naveed Massjouni Ngo Quoc Cuong Nick Bartos Nicolas Bock Nikhil Komawar Nikola Dipanov Nikolay Sokolov Nirmal Ranganathan Oleg Bondarev Ollie Leahy Ondřej Nový OpenStack Release Bot Pablo Fernando Cargnelutti Pallavi Paul Griffin Paul McMillan Paul Murray Paul Murray Pavel Kholkin Pavel Kravchenco Pawel Koniszewski Peng Yong Peter Feiner Petersingh Anburaj Phil Day Prashanth kumar reddy Pushkar Umaranikar Pádraig Brady Q.hongtao Qin Zhao Qin Zhao QingXin Meng Qiu Fossen Qiu Yu Rabi Mishra Radoslav Gerganov Rafael Folco Rafi Khardalian Rajesh Tailor Rajesh Tailor Rawan Herzallah Ray Chen Renier Morales Renuka Apte René Ribaud Ricardo Carrillo Cruz Richard Jones Rick Clark Rick Harris Rick Harris Robert Collins Robert Collins Robin Naundorf Rodolfo Alonso Hernandez Rohan Kanade Rohan Rhishikesh Kanade Rohit Karajgi Roman Bogorodskiy Roman Podoliaka Roman Podolyaka Rongze Zhu RongzeZhu Ruby Loo Rui Chen Rushi Agrawal Russell Bryant Ryan Lane Ryan Lane Ryan Lucio Ryan Moore Ryan Rossiter Ryu Ishimoto Sachi King Sahid Orentino Ferdjaoui Sahid Orentino Ferdjaoui Salvatore Orlando Sam Morrison Samuel Matzek Sandy Walsh Sandy Walsh Sarafraj Singh Sascha Peilicke Sascha Peilicke Sascha Peilicke Scott Moser Sean Chen Sean Dague Sean Dague Sean Dague Sean McGinnis Sean Mooney Sean Mooney Sergey Nikitin Sergey Skripnick Sergey Vilgelm Shane Wang ShaoHe Feng Shilla Saebi Shuangtai Tian Shuquan Huang Simon Pasquier Sirushti Murugesan Sivasathurappan Radhakrishnan Sleepsonthefloor Soren Hansen Soren Hansen Spencer Yu Stanislaw Pitucha Stephen Finucane Stephen Finucane Stephen Finucane Stephen Gran Steve Kowalik Steven Dake Steven Kaufer Sudipta Biswas Sujitha SuperStack Surya Seetharaman Sven Anderson Sylvain Bauza Takashi Kajinami Takashi Kajinami Takashi NATSUME Takashi Natsume Tetsuro Nakamura Tetsuro Nakamura Thelo Gaultier Thierry Carrez Thomas Bechtold Thomas Bechtold Thomas Goirand Thuleau Édouard Tiago Mello Tim Simpson Timofey Durakov Toan Nguyen Todd Willey Todd Willey Tom Fifield Tomofumi Hayashi Tomoki Sekiyama Tony Breeds Tracy Jones Trey Morris Trey Morris Tushar Patil Unmesh Gurjar Unmesh Gurjar Vasyl Saienko Victor Sergeyev Victor Stinner Vilobh Meshram Vishvananda Ishaya Vishvananda Ishaya Vladik Romanovsky Vladik Romanovsky Vladyslav Drok Vu Cong Tuan Vui Lam Walter A. 
Boring IV Wanlong Gao Wenhao Xu William Wolf William Wolf Xavier Queralt XiaojueGuan XieYingYun Yaguang Tang Yaguang Tang Yang Yu Yikun Jiang Yingxin Yufang Zhang Yuiko Takada Yun Mao Yunhong Jiang Yunhong, Jiang Yuriy Taraday Yuriy Zveryanskyy Yuuichi Fujioka Yuzlikeev Eduard ZHU ZHU Zed Shaw Zhenguo Niu Zhi Yan Liu ZhiQiang Fan ZhiQiang Fan Zhihai Song Zhiteng Huang Zhongyue Luo Zhongyue Luo abhishekkekane andrewbogott andy anguoming annegentle bhagyashris brian-lamar caoyuan chenghuiyu chenpengzi <1523688226@qq.com> chenxing chenxing dane-fichter danwent danwent@gmail.com <> deepak.mourya deepak_mourya deepak_mourya deepakmourya dineshbhor eewayhsu ericxiett esberglu fprzewozn fpxie fuzk gengjh ghanshyam ghanshyam ghanshyam grace.yu gtt116 gugug guohliu hartsocks huang.zhiping huangtianhua hussainchachuliya iswarya_vakati ivan-zhu jacky06 jakedahn jaypipes@gmail.com <> jeckxie jianghua wang jiataotj jichen jichenjc jmeridth kairoaraujo kangyufei karimull kashivreddy lawrancejing leizhang liu-sheng liupeng liusheng liyingjun liyingjun lizheming lvdongbing lzyeval masumotok matt.dietz@rackspace.com <> mdietz melanie witt melanie witt melissaml mjbright msdubov naichuans pangliye pcarlton pengyuesheng pengyuwei pkholkin pyw qinhaizhong01 qiufossen rajat29 renukaapte root root ruichen ryo.kurahashi sateesh shihanzhang smartu3 songwenping sonu.kumar stewie925 tanlin termie termie unicell vladimir.p wangdequn wanghui wangqiangbj wingwj yanpuqing yatin karel yatinkarel yugsuo yunhong jiang yuntong yuntongjin yushangbin zhang-jinnan zhang.lei zhangbailin zhangdebo zhangyangyang zhufl zhulingjie zte-hanrong Édouard Thuleau ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/CONTRIBUTING.rst0000664000175000017500000000115400000000000020277 0ustar00zuulzuul00000000000000The source repository for this project can be found at: https://opendev.org/openstack/placement Pull requests submitted through GitHub are not monitored. 
To start contributing to OpenStack, follow the steps in the contribution guide to set up and use Gerrit: https://docs.openstack.org/contributors/code-and-documentation/quick-start.html Bugs should be filed on launchpad: https://bugs.launchpad.net/placement/+filebug For more specific information about contributing to this repository, see the placement contributor guide: https://docs.openstack.org/placement/latest/contributor/contributing.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591510.0 openstack_placement-13.0.0/ChangeLog0000664000175000017500000134022000000000000017411 0ustar00zuulzuul00000000000000CHANGES ======= 13.0.0 ------ * Bump os-traits to 3.3.0 in requirements * Changed OS version CentOS to CentOS Stream * Adapt cmd unit test depending on python version * Add round-robin candidate generation strategy * doc: Use dnf instead of yum * Adapt tests to new messages from jsonschema 4.23.0 * Factor out allocation candidate generation strategy * Add a global limit on the number of allocation candidates * Replace deprecated FormatChecker.cls\_checks * Update gate jobs as per the 2025.1 cycle testing runtime * reno: Update master for unmaintained/2023.1 * Switch python version used for periodic jobs * Remove Python 3.8 support * requirements: Remove setuptools * Drop db migration tool * Move upper functional job to py312 * Replace py38 job by py311 job * Drop SQLALCHEMY\_WARN\_20 * Update master for stable/2024.2 12.0.0 ------ * Update 2024.2 reqs to support os-traits 3.1.0 as min version * Bump oslo.policy version to enable new RBAC by default * Update test to use service role * pre-commit: Add sphinx-lint * pre-commit: Add autopep8 * Integrate pre-commit * tox: Simplify functional testenv definitions * Add placement.wsgi.api module * Remove old excludes * reno: Update master for unmaintained/zed * Remove SQLAlchemy tips jobs * Update master for stable/2024.1 11.0.0 ------ * reno: Update master for unmaintained/xena * reno: Update master for unmaintained/wallaby * reno: Update master for unmaintained/victoria * reno: Update master for unmaintained/yoga * Add upgrade job from 2023.1 * tox: Drop envdir * Bump hacking * Update python classifier in setup.cfg * Add job to test with SQLAlchemy master (2.x) * db: Wrap raw SQL query in sqlalchemy.text * Update master for stable/2023.2 10.0.0 ------ * Update 2023.2 reqs to support os-traits 3.0.0 as min version * Fix bindep.txt for python 3.11 job(Debian Bookworm) * Fix a wrong assertion method * Changed /tmp/migrate-db.rc to /root/migrate-db.rc * tests: Warn on \*any\* SAWarning warning * tests: Use base class for all functional tests * db: Replace use of deprecated API * Bugtracker link update * Move implemented specs for Xena and Yoga release * Do not use coalesce for consumers.uuid * Update master for stable/2023.1 * Db: Drop redundant indexes for columns already having unique constraint 9.0.0 ----- * Update 2023.1 reqs to support os-traits 2.10 as min version * Modify the placement API policies defaults and scope\_type 9.0.0.0b1 --------- * Avoid rbac defaults conflict in functional tests * Make tox.ini tox 4.0.0 compatible * Policy defaults improvement spec * Switch to 2023.1 Python3 unit tests and generic template name * update bindep for ubuntu 22.04 * Update master for stable/zed 8.0.0 ----- * Make us compatible with oslo.db 12.1.0 * Remove unicode literal strings * Clarify trait filtering in the API doc * Func test for os-traits and os-resource-classes lib sync * Update 
placement for os-traits 2.8.0 release * disable traits count check to allow os-traits 2.8.0 * Fix typo in schema * Fix typos * Add WA about resource\_providers.can\_host removal * Update python testing as per zed cycle testing runtime * doc: Comment out language option * Update python testing as per zed cycle testing runtime * Drop lower-constraints.txt and its testing * tox: Enable SQLAlchemy 2.0 warnings * db: Use Row, not LegacyRow * tests: Restore - don't reset - warning filters * db: Remove unnecessary use of '\_mapping' * db: Use explicit transactions * db: Replace deprecated 'FromClause.select().whereclause' parameter * db: Remove use of non-integer/slice indices * db: Update 'select()' calls * db: Replace 'as\_scalar()' with 'scalar\_subquery()' * db: Replace implicit conversion of SELECT into FROM * Make perfload jobs fail if write allocation fails * Add zed spec directory * Add Python3 zed unit tests * Update master for stable/yoga * Change minversion of tox to 3.18.0 7.0.0 ----- * Add microversion 1.39 to support any-trait queries * Remove unused compatibility code * Add any-traits support for allocation candidates * Add any-traits support for listing resource providers * Extend the RP tree DB query to support any-traits * Enhance doc of \_get\_trees\_with\_traits * DB layer should only depend on trait id not names * Extend the RP db query to support any-traits * Fix perfload jobs after consumer\_types * setup: Replace dashes with underscores * tox: Remove psycopg2 warning filter * tests: Silence noisy tests * Refactor trait normalization * Extra tests around required traits * update placement for os-traits 2.7.0 release * disable traits count check to allow os-traits 2.7.0 * Updating python testing as per Yoga testing runtime * Spec: support mixing required traits with any traits * Spec: support any trait in allocation candidates * Add yoga spec directory * Use 'functional-without-sample-db-tests' tox env for placement nova job * Bump min decorator to 4.0.0 * Modify the comment that is confused * Add Python3 yoga unit tests * Update master for stable/xena 6.0.0.0rc1 ---------- * Narrow scope of set allocations database transaction * Call Engine.execute() in func tests for oslo.db 11.0.0 * Add reproducer for Project creation race bug * Fix adding 'unknown' to the ConsumerTypeCache * Reproduce 404 when allocation queried with 1.38 * Refactor consumer type methods for readability * Bump os-traits to latest 2.6.0 * Switch ConsumerType to use an AttributeCache * Microversion 1.38: API support for consumer types * Add consumer\_types migration, database and object changes * Enable HTTPProxyToWSGI middleware to find actual client ips * placement-status: check only consumers in allocation table * Fix SQL query counting the number of individual consumers having allocations by only selecting the aggregated consumer\_id column * Bump os-resource-classes requirements * Add support for RP re-parenting and orphaning * Move placement specs from nova * Fix oslo policy DeprecatedRule warnings * Bump os-resource-classes deps to 1.0.0 * [doc] Redirect people to #openstack-nova * Fix webchat link in the doc * Update doc after freenode -> OFTC move * Add periodic-stable-jobs template * Adapt to SQLAlchemy 1.4 * Add weekly jobs * Make sure the policy upgrade check get a valid config * Add 'cryptography' package to test-requirements.txt * Add a reproduction test for bug story/2008831 * Add Python3 xena unit tests * Update master for stable/wallaby * Correctly handle integrity errors on MySQL 8.x 
5.0.0 ----- * Update traits in tests and requirements * Move policy deprecation to base rules * policy: Add releasenote for RBAC work * Implement secure RBAC for reshaper * policy: Add note about keystone's expansion of roles * policy: Deprecate 'admin\_api' rule * policy: Remove the deprecated 'placement' rule * Implement secure RBAC for usage * Implement secure RBAC for traits * Implement secure RBAC for resource classes * Implement secure RBAC for inventories * Implement secure RBAC for allocation candidates * Implement secure RBAC for allocations * Implement secure RBAC for aggregates * Implement secure RBAC for resource providers * policy: Don't persist default rule changes in tests * policy: Suppress policy deprecation warnings * Pass context objects to oslo.policy directly * [goal] Deprecate the JSON formatted policy file * Bump oslo.log version to 4.3.0 * Remove deprecated [placement]/policy\_file config option * Fix l-c job and move to latest hacking 4.0.0 * Remove unused test helper * Fix Placement Doc * Add functional-py3[89] tox targets * Add Python3 wallaby unit tests * Update master for stable/victoria 4.0.0 ----- * Adds py38 functional tests to gate * Bump default tox env from py37 to py38 * Correct spell error from \`seperate\` to \`separate\` * [goal] Migrate testing to ubuntu focal * Cap jsonschema 3.2.0 as the minimal version * Remove translation sections from setup.cfg * Replace assertItemsEqual with assertCountEqual * Update perfload jobs for python3 * Add DEBUG logs to help troubleshoot no allocation candidates * Update for os-traits 2.4.0 * Update verification for Python3 * Remove all usage of six library * drop mock from lower-constraints * Stop to use the \_\_future\_\_ module * Switch to newer openstackdocstheme and reno versions * Switch to new grenade job name * Use unittest.mock instead of third party mock * Add py38 package metadata * Add Python3 victoria unit tests * Update master for stable/ussuri * [Community goal] Update contributor documentation 3.0.0 ----- * Cleanup py27 support * Provide more accurate links in doc/source/user/provider-tree.rst * Update for os-traits 2.2.0 * Add check-requirements to project template * Update for os-traits 2.1.0 * Update for os-traits 2.0.0 * Remove py2 specific requirement for docs * Start README.rst with a better title * Add allocation\_conflict\_retry\_count conf setting * Drop support for python 2 * Clarify GET /allocations/$c for nonexistent $c * Update for os-traits 1.1.0 * api-ref: note GET /resource\_providers?resources amount constraints * Remove unused import statement * Add --skip-locks flag to mysql-migrate-db.sh * Fix domain name in install doc (2) * Fix domain name in install doc * Update READMEs for sample policy/config generation * Update master for stable/train 2.0.0.0rc1 ---------- * Add Train upgrade notes * Add train-prelude release note * Clean up contributor document * Clean up document's index * Update the constraints url * Fix section structure for pdf docs * Build pdf docs * Un-cap jsonschema for python3.6/3.7 support * Deprecate [placement]/policy\_file config option * Update nested-magic spec for root\_member\_of * Move nested magic spec to implemented * Update setup.cfg to include project\_urls * Fix misspell word * Fix typo in microversion sequence test * Fix links to migration scripts * Clarify the NOTE associated with ordering of middleware * Refactor exclude\_nested\_providers() * Get usages in \_build\_provider\_summaries() * Add place-held \_static dir for Sphinx 2.2.0 * api-ref: fix 
typo in aggregates note * Avoid duplicate ProviderSummary in \_merge\_candidates * Add a rw\_ctx.psum\_res\_by\_rp\_rc, for clarity * Use rp.id instead of uuid in \_rp\_rc\_key * Add rw\_ctx.parent\_uuid\_by\_rp\_uuid, for clarity * Add gabbits using a DeepNUMANetworkFixture * gabbi test for same\_subtree with an ancestry hole * Add tests demonstrating overlapping same\_subtreeZ * Fix allocation bug in NUMANetworkFixture * Use expanding bindparam in get\_traits\_by\_provider\_tree * Copy AllocationRequestResource only when necessary * Add \_\_copy\_\_ method to AllocationRequest{,Resource} * Correct SQL docstring on \_get\_usages\_by\_provider\_trees * Use another expanding bindparam in \_get\_usages\_by\_provider\_trees * Move provider\_ids\_from\_rp\_ids to allocation\_candidate and fix * Optimize trait creation to check existence first * Improve docs and comments for provider\_ids\_from\_rp\_ids * Remove double join in provider\_ids\_from\_rp\_ids * Clean up the extend\_usages\_by\_provider\_tree method * Trivial: Remove duplicate usage of db context * Use expanding bindparam in extend\_usages\_by\_provider\_tree * Make \_get\_trees\_with\_traits return a set * Track usage info on RequestWideSearchContext * Further optimize \_build\_provider\_summaries * Add RequestWideSearchContext.summaries\_by\_id * Add apache benchmark (ab) to end of perfload jobs * Implement a more complex nested-perfload topology * Run nested-perfload parallel correctly * Make placement base API return version without auth * Use expanding bindparam in provider\_ids\_from\_rp\_ids in\_ * Use \_\_slots\_\_ in commonly used objects * Remove ProviderIds namedtuple * \_get\_all\_by\_filters\_from\_db do not cast to list of dict * Bump os-traits minimum to 0.16.0 * Blacklist sphinx 2.1.0 (autodoc bug) * Use TraitCache for Trait.get\_by\_name * Extra gabbi tests for same\_subtree * Use integrated-gate-placement zuul template * Make a TraitCache similar to ResourceClassCache * Further simplify microversion utils * Update api-ref to point to API-SIG microversion spec * Update api-ref location * Run 'tempest-ipv6-only' job in gate * Bump os-resource-classes requirements * Extract a \_get\_schema from list\_allocation\_candidates * Move rc\_cache onto RequestContext * Make placement testing easier on osx * Simplify placement.microversion:\_fully\_qualified\_name * api-ref: Document generations * Add placement.query.missing\_value in api-ref * Add Python 3 Train unit tests * Doc \`same\_subtree\` queryparam * Add query.duplicate\_key and .bad\_value in api-ref * Follow up fix for same\_subtree documentation * Trivial: Fix docs for functions * Support \`same\_subtree\` queryparam * tox: Stop building api-ref docs with the main docs * Add whereto for testing redirect rules * Update implemented spec and spec document handling * Correct variable use and naming in mappings tests * Manage mappings on AllocationRequest object * Recreate: incorrect mappings with group\_policy=none * Fix up some inaccuracies in perfload comments and logs * Spec: Support Consumer Types * Bump os-traits minimum to 0.15.0 * Remove gate/post\_test\_hook.sh * Centralize and clarify pip in the docs * Mention OsProfiler in the testing doc * Add OsProfiler config options to generated reference * Trivial: Update document for Request IDs * Add a test for granular member\_of not flowing down * Miscellaneous doc/comment/log cleanups * Microversion 1.35: root\_required * RequestWideParams and RequestWideSearchContext * Refactor anchors\_for\_sharing\_providers * 
research\_context.\_get\_roots\_with\_traits() * Spec for nested magic 1 * Add support for osprofiler in wsgi * Move non-nested perfload shell commands to script * Nested provider performance testing * Update SUSE install documentation * Remove overly-verbose allocation request log * Uniquify allocation mappings * Remove a redundant test * Add missing suffix-related docstrings * Implement allocation candidate mappings * Prepare objects for allocation request mappings * Remove incomplete consumer inline migrations * Add a blocker migration for missing consumer records * Correctly limit provider summaries when nested * Add NUMANetworkFixture for gabbits * Stabilize AllocationRequest hash * perfload with written allocations * Bump os-traits to latest release (0.14.0) * Optionally run a wsgi profiler when asked * Bump os-traits requirements * Resource provider - request group mapping in allocation candidate * Bump openstackdocstheme to 1.30.0 * Reuse cache result for sharing providers capacity * Move seek providers with resource to context * Remove normalize trait map func * Cache provider ids in requested aggregates * Move search functions to the research context file * Add RequestGroupSearchContext class * Modernize CORS config and setup * Add olso.middleware.cors to conf generator * Don't run functional.db tests in nova functional run * Trivial: Fix comment for LEFT join * Use trait strings in ProviderSummary objects * Avoid traversing summaries in \_check\_traits\_for\_alloc\_request * Canary test for os-traits version * Fix typo in usage.yaml and usage-policy.yaml * Bump os-resource-classes requirements * Fixups from removing null provider protections * Remove null root provider protections * Add blocker alembic migration for null root\_provider\_ids * Change "Missing Root Provider IDs" upgrade check to a failure * Allow [a-zA-Z0-9\_-]{1,64} for request group suffix * Add 'docs' worklist to worklist table * Cap sphinx for py2 to match global requirements * Enhance debug logging in allocation candidate handling * Skip \_exclude\_nested\_providers() if not nested * Raise os-traits os-resource-classes constraints * Package db migration scripts in placement pypi dist * Skip notification sample tests when running nova functional * Run nova-tox-functional-py36 in the placement gate * Update worklist information for contributors * Remind people to use postgresql-migrate-db.sh when migrating data * Replace git.openstack.org URLs with opendev.org URLs * Dropping the py35 testing * OpenDev Migration Patch * Add links to storyboard worklists to contributing.rst * api-ref: fix formatting in member\_of param for 1.21 * Fix arg typos in contributing.rst * Remove dead code * Correct task status when a task is under review * Rename api-ref parameters * FUP on negative-aggregate-membership series * Fix a broken link in a release note * Refactor aggregate \_get\_trees\_matching\_all() * Refactor ResourceProviderListTestCase * Negative member\_of query with microversion 1.32 * Prepare for negative member queryparam 2 * Fill in the Writing Code section of contributing.rst * Fill in the New Features section of contributing.rst * Fill in reviewing section of contributing.rst * Fill in the bugs section of contributing.rst * Fix debug log getting allocation\_candidates * Remove fake resource class from fake\_ensure\_cache * s/rc\_cache.ensure\_rc\_cache/rc\_cache.ensure/ * Add initial framing for a contributing doc * Remove use of oslo.i18n and translation * Replace openstack.org git:// URLs with https:// * 
Initial structure for in-tree specs * Add register\_opts param to PlacementFixture * Revert "Centralize registration of logging options" * Update master for stable/stein 1.0.0.0rc1 ---------- * Group API versions by release * Flesh out the post-install verify doc * Address followups in the upgrade from nova doc * Fix bullet format from I580fa4394cb93b8e8141ee2d546543c174356a47 * Link to more info on service user and endpoints from deployment * Add prelude to release notes * Upgrade from rocky-nova docs * Update the from-pypi install instructions * Rename and restructure install docs * Centralize registration of logging options * Add oslo.log to genconfig * Slightly improve usage documentation * Update install docs for single database * Spec: Support filtering by forbidden aggregates 2 * Spec: Support filtering by forbidden aggregates 1 * Reuse common get\_providers\_with\_resource() * Prepare for negative member\_of queryparam * Add link to case studies in alloc\_cands api-ref * Document alloc-candidates-in-tree * Get rid of backslash continuations in code * Refactor tests of \_get\_trees\_matching\_all() * Note removal of OVO in contrib docs * Create ProviderTreeDBHelperTestCase * Move TestAllocation into test\_allocation * Extract user and endpoint creation install doc * Remove unused table constants from allocation\_candiate.py * Move set\_traits tests back to test\_resource\_provider * Remove ResourceProviderList class * Remove InventoryList class * Move Inventory and InventoryList to own file * Reorder classes and methods in allocation\_candidate * Move allocation candidate classes and methods * Remove the TraitList class * Update get trait and traits calls that were writers to reader * Move Trait and TraitList to own module * Clean up the intro to the REST API section * Clean up around links to database migrations scripts * Indicate existince of \`sync\_on\_startup\` option * Link to install docs from deployment overview * Remove NAME in placement/deploy.py * Make policy init more threadsafe * Be explicit about which conf is being used by policy enforcer * Rename global \_ENFORCER\_PLACEMENT to \_ENFORCER * Update docs bug links to storyboard * Update CONTRIBUTING and HACKING * Remove extraction warning from README * Update bug tracker link in README and CONTRIBUTING * Add missing ws seperator between words * Do not separately configure logging for alembic * Use sync\_on\_startup in placement-perfload job * Trivial: Update doc for \_set\_traits() * Remove the ResourceClassList class * Move ResourceClass and ResourceClassList to own module * Trivial: pull \_normalize\_trait\_map() out * Use oslo\_utils.excutils for reraise exception * Add explicit short-circuit in get\_all\_by\_consumer\_id * Stop yelling the 1.11 and 1.25 microversion history at people * Remove pep8 whitespace ignores * Fix typo in db-auto-sync release note * Make the PlacementFixture usable without intercept * Docs: extract testing info to own sub-page * Inline Consumer.increment\_generation() * Use native list for lists of Allocation * Move Allocation and AllocationList to own module * ResourceProvider.increment\_generation() * Move reshape() into placement.objects.reshaper * Make base test case file for object unit tests * Use native list for lists of Usage * Move RC\_CACHE in resource\_class\_cache * Clean up ObjectList.\_set\_objects signature * Move \*List.\_\_repr\_\_ into ObjectList * Move \_set\_objects into ObjectList * Factor listiness into an ObjectList base class * Adds debug log in allocation candidates 
* Refactor \_get\_trees\_matching\_all() * Retry new transaction on failure * FUPs for improve-debug-log series * Remove NOTEs about \_RE\_INV\_IN\_USE * Use set instead of list * Remove redundant second cast to int * Don't use OVO with ResourceProvider and ResourceProviderList * Cast Usage.usage to int * Test for multiple limit/group\_policy qparams * Add second shared provider to SharedStorageFixture * Optionally migrate database at service startup * in\_tree[N] alloc\_cands with microversion 1.31 * Prepare in\_tree allocation candidates * Fix a bad granular gabbi test * Add DISK\_GB to compute in SharedStorageFixture * Adds check for duplicate alloc\_cands * Don't use OVO for Inventory and InventoryList * Don't use OVO in Trait and TraitList objects * Adds tests for granular single shared rp request * Add a vision-reflection * Set timestamps in Allocation objects * Don't use OVO in ResourceClass and ResourceClassList * Don't use OVO in User object * Don't use OVO in Project object * Don't use OVO in Consumer object * Don't use OVO for Usage and UsageList * Don't use OVO for Allocation and AllocationList * Don't use OVO with allocation candidates * Update the doc in \_get\_provider\_ids\_matching() * Downgrade os-traits/os-resource-classes sync log level to DEBUG * Use one ConfigOpts in placement-status * Use tox 3.1.1 fixes * tox: Don't write byte code (maybe) * Trivial: return empty set instead of list * Also time placeload when doing perfload * Add upgrade status check for missing root ids * Use local config for placement-status CLI * Adjust database connection pool config in perfload tests * Increase loop size on \_ensure\_aggregate * Update standard resource class counts in tests * Update placement-status-checks history * Configure database api in upgrade check * Add upgrade status check for incomplete consumers * Copy create\_incomplete\_consumers online data migration from nova * Set root\_provider\_id in the database * Add online-data-migration DB commands * Placement install documentation * Add release-notes-jobs-python3 job * Trim the release notes to just stein/master * Add provider UUID to reshaper gen conflict error * Add irrelevant-files for integrated-gate-py35 jobs * Update the incorrect url 0.1.0 ----- * Correct link rest api history * Use os-resource-classes in placement * Document API error codes * Add irrelevant files list to perfload job * Add stamp DB version to the migration script * Retry \_ensure\_aggregates a limited number of times * Remove dead code in objects/resource\_provider.py * Add python3.7 unit test job * Remove writer context from \_ensure\_aggregates * Fix a format of the API version history doc * Don't create placement.conf in perfload.yaml * Update author-email in setup.cfg * Add alembic version stamp capability to the DB * Use oslo\_db fixtures * Update the goals doc to reflect non-global-config * Use a smaller base job for the perfload run * Add a perfload job * Stop using global oslo\_config * Allow placement to start without a config file * Fix typo * Remove [keystone] config options from placement * Remove keystoneauth1 opts from placement config group * Correct lower-constraints.txt and the related tox job * Start a contributor goals document * Add a doc describing a quick live environment * Add integrated-gate-py35 template to .zuul.yaml * Documentation cleanup: front page * Add assertions to verify placement-manage error output * manage: Do not use set\_defaults on parent parsers with py2 * Fix a bug tag for placement doc * Add 
placement-status upgrade check command * Consider root id is None in the database case * Adapt placement fixtures for external use * Add a placement-manage CLI * Add missing ws seperator between words * Remove sqlalchemy-migrate from requirements.txt * Fix comment in earlier patch * Add a document for creating DB revisions * Remove build-openstack-api-ref jobs * Delete the old migrations * Added alembic environment * Add recreate test for bug 1799892 * Harden placement init under wsgi * Make tox -ereleasenotes work * Clean up and clarify tox.ini * Add bandwidth related standard resource classes * Move ensure\_consumer to a new placement.handlers.util * fix wrong spelling of "explicit" * Fix the error package name in comment * Add a link to "Add Generation to Consumers" spec * Publish placement documents * De-nova-ify doc/source/index.rst * De-nova-ify doc/README.rst * Clean up .gitignore file * Fix genpolicy tox job * Add nova database migration script for postgresql * Use unique consumer\_id when doing online data migration * Add recreate test for bug 1798163 * Remove support for multiple database from migration.py * Remove redundant \`where\` for forbidden traits * Placement: Remove usage of get\_legacy\_facade() * Remove placement.db.migration * Remove placement.db.base * Follow up for placement usage document * Fix member\_of doc in RequestGroup.dict\_from\_request * wsgi: Always reset conf.CONF when starting the application * Add nova database migration script * Add a document for allcation candidates constraint * Fix aggregate members in nested alloc candidates * Add alloc cands test with nested and aggregates * Fix missing specifying doctrees directory * Fix link from root doc to contributor guide * Reduce max-complexity to 15 * nova.exception -> placement.exception in docstrings and comment * nova.context -> placement.context in doc strings * Sort openstack\_projects in doc conf * De-nova-ify and reformat contributor guide * Refactor: separate limiting GET /a\_c results * DRY trait/aggregate prefetch * DRY trait existence check * DRY usage and capacity SQL clauses * max-complexity=>16: refactor GET /a\_c qs parsing * Move qs parsing to placement.lib.RequestGroup * Test for missing database configuration message * Put stestr group\_regex in .stestr.conf * Add a zuul check job for coverage * Use both unit and functional for coverage testing * oslo\_config fixture in policy tests and 'placement' in policy * de-novify wsgi application to expect placement config * Rationalize and clarify database configuration * Update code and opts in conf/paths.py for placement * Make config docs build * s/placement-config-generator/config-generator/ * Link to tempest doc in tests/README.rst * Set upper bound on max-complexity in pep8 * Update the HACKING.rst file * Name arguments to \_get\_provider\_ids\_matching * Remove unused conf opts * Update and move test README.rst * Set the name of the package to openstack-placement * config: Add oslo-config-generator config * Remove multiple database scaffolding * Add logging\_error\_fixture to functional tests * Update README to warn of status * Refresh maximum version info in rest history doc * Remove uuidsentinel.py * Remove redundant reference to nova ConfFixture * Rename files to remove 'placement' * Add lower-constraints job * Use templates in .zuul.yaml * Make docs build * Add api-ref job * Rename PlacementPolicyFixture to PolicyFixture * Unify utils.py and util.py * Use uuidsentinel from oslo.utils * Update requirements and test-requirements * Fix 
aesthetic issues from I4974a28de541aace043504f * Removing non-existent job from tox envlist * Add python 3.6 test jobs * Make pep8 tests voting * Remove unused fixtures in placement/tests/fixtures.py * Remove placement/db/api.py * Fix line length and whitespace issues * Remove unused imports as identified by pep8 * Fix alpha-ordering of imports for pep8 * Remove placement/test.py * Make unit tests voting * Turn on logging for the request log test * Fix configuration handling in policy unit test * Trim placement/utils.py to the single method used * Make functional tests voting and gating * Use absolute import in gabbi fixture * Import placement, not nova, in rp db tests * Make a basic working DatabaseFixture * Establish an importable 'conf' package * Tidy up use of policy\_fixture * Remove unused CheatingSerializer * Use placement.uuidsentinel * Remove the PlacementFixture from fixtures * Remove more unused imports from fixtures * Remove unused db functionality and files * Remove some imports from test/fixtures that will not be used * Empty \_\_init\_\_.py files that should be empty * Correct several nova.tests and nova.tests.functional imports * Mechanically correct import of functional base class * Update i18n handling to be placement oriented * Inspect and correct tox.ini, .stestr.conf and setup.cfg * Update nova.db import paths * Replace the nova import paths with placement * Remove the import pathing for the old structure * Update the functional test import paths * Move the unit tests * Move the functional test directories * Move the placement code to the base * Remove the Nova aggregate files * Move the api-ref directories * Rename the 'nova' directories to 'placement' * Apply placement.rst change from Idf8997d5efdfdfca6 * Fix race condition in reshaper handler * Set up initial .zuul.yaml * Removed the zuul config file * (Re)start caching scheduler after starting computes in tests * [placement] Make \_ensure\_aggregate context not independent * Mention (unused) RP generation in POST /allocs/{c} * reshaper gabbit: Nix comments re doubled max\_unit * Revert "Don't use '\_TransactionContextManager.\_async'" * Don't use '\_TransactionContextManager.\_async' * Make monkey patch work in uWSGI mode * [placement] split gigantor SQL query, add logging * Make instance\_list perform per-cell batching * Document no content on POST /reshaper 204 * Fix create\_resource\_provider docstring * reshaper: Look up provider if not in inventories * [placement] Add functional test to verify presence of policy * Normalize dashless 'resource provider create' uuid * [placement] Add /reshaper handler for POST * [placement] Regex consts for placement schema * Set policy\_opt defaults in placement deploy unit test * Set policy\_opt defaults in placement gabbi fixture * Remove ChanceScheduler * Making consistent used of GiB and MiB in API ref * placement: use single-shot INSERT/DELETE agg * Add trait query to placement perf check * Add explanatory prefix to post\_test\_perf output * Remove blacklisted py3 xen tests * Add placement perf info gathering hook to end of nova-next * [placement] api-ref: Add missing aggregates example * placement: use simple code paths when possible * Test case for multiple forbidden traits * Adds a test for \_get\_provider\_ids\_matching() * placement: ignore policy scope check failures if not enforcing scope * Remove patching the mock lib * Add additional info to resource provider aggregates update API * Nix 'new in 1.19' from 1.19 sections for rp aggs * [placement] api-ref: add 
description for 1.29 * Add the guideline to write API reference * get provider IDs once when building summaries * [placement] Avoid rp.get\_by\_uuid in allocation\_candidates * Add explicit functional-py36 tox target * api-ref: fix min\_version for parent\_provider\_uuid in responses * [placement] Add version directives in the history doc * Use common functions in granular fixture * Define irrelevant-files for tempest-full-py3 job * Add tempest-slow job to run the tempest slow tests * Not use project table for user table * Adds a test for getting allocations API * [placement] ensure\_rc\_cache only at start of process * [placement] Move resource\_class\_cache into placement hierarchy * [placement] Debug log per granular request group * Fix nits in resource\_provider.py * Scrub hw:cpu\_model from API samples * Improve NeutronFixture and remove unncessary stubbing * tox: Ensure reused envdirs share the same deps * Fix a typo in comment in resource\_provider.py * Refactor AllocationFixture in placement test * Increase max\_unit in placement test fixture * Use common functions in NonSharedStorageFixture * Fix comments in \_anchors\_for\_sharing\_providers and related test * Ensure the order of AllocationRequestResources * Don't poison Host.\_init\_events if it's already mocked * Remove redundant join in \_anchors\_for\_sharing\_providers * [placement] Retry allocation writes server side * [placement] api-ref: add traits parameter * [placement] Use a simplified WarningsFixture * [placement] Use a non-nova log capture fixture * [placement] Use oslotest CaptureOutput fixture * [placement] Use own set\_middleware\_defaults * Add additional functional tests for NUMA networks * Add description for placement 1.26 * Fix create\_all() to replace\_all() in comments * [placement] Use base test in placement functional tests * [placement] Extract base functional test case from test\_direct * Use placement context in placement functional tests * doc: remove rocky-specific nova-scheduler min placement version * Add nova-manage placement sync\_aggregates * Add functional tests for numa-aware-vswitches * tox: Silence psycopg2 warnings * Blacklist greenlet 0.4.14 * Enhance doc to guide user to use nova user * doc: link to AZ talk from the Rocky summit * Online data migration for queued\_for\_delete flag * Rename auth\_uri to www\_authenticate\_uri * perform reshaper operations in single transaction * In Python3.7 async is a keyword [1] * [placement] disallow additional fields in allocations * [placement] cover bad content-length header * [placement] Add gabbi coverage for inv of missing rp * [placement] Add gabbi coverage for an inventory change * update tox venv env to install all requirements * Escalate UUID validation warning to error in test * Move legacy-tempest-dsvm-nova-os-vif in repo * Use ThreadPoolExecutor for max\_concurrent\_live\_migrations * Replace support matrix ext with common library * Add UUID validation for consumer\_uuid * Address nits in server group policy series * z/VM Driver: Initial change set of z/VM driver * Transform aggregate.update\_prop notification * do not assume 1 consumer in AllocList.delete\_all() * Add policy to InstanceGroup object * Add placement.concurrent\_udpate to generation pre-checks * Test for unsanitized consumer UUID * Revert "docs: Disable smartquotes" * [placement] add error.code on a ConcurrentUpdateDetected * Update some placement docs to reflect modern times * Remove unused variable in migration * Address nits from consumer generation * update project/user 
for consumer in allocation * Use nova.db.api directly * Update root providers in same tree * Add queued for delete to instance\_mappings table * placement: delete auto-created consumers on fail * delete consumers which no longer have allocations * make incomplete\_consumer\_project\_id a valid UUID * Refactor policies to policy in InstanceGroup DB model * Add rules column to instance\_group\_policy table * Handle compare in test\_pre\_live\_migration\_volume\_backed\* directly * Resource\_provider API handler does not return specific error codes * Use valid UUID in the placement gabbits * Update install guide for placement database configuration * move lookup of provider from \_new\_allocations() * Prevent updating an RP's parent to form a loop * Handle nested serialized json entries in assertJsonEqual * conf: Resolve Sphinx errors * Convert 'placement\_api\_docs' into a Sphinx extension * Regression test for bug 1779635 * Regression test for bug 1779818 * [placement] fix allocation handler docstring typo * Fix placement incompatible with webob 1.7 * Define common variables for irrelevant-files * Fix nits in placement-return-all-resources series * Add microversion for nested allocation candidate * Use ironic-tempest-dsvm-ipa-wholedisk-bios-agent\_ipmitool-tinyipa in tree * tox: Reuse envdirs * tox: Document and dedupe mostly everything * trivial: Remove 'tools/releasenotes\_tox.sh' * Make nova-lvm run in check on libvirt changes and compute API tests * Remove remaining legacy DB API instance\_group\* methods * Remove unused DB API instance\_group\_member\* methods * Remove unused DB API instance\_group\_delete method * [placement] demonstrate part of bug 1778591 with a gabbi test * Handle CannotDeleteParentResourceProvider to 409 Conflict * [placement] Fix capacity tracking in POST /allocations * Update scheduler to use image-traits * [placement] Add test demonstrating bug 1778743 * Fix the duplicated config options of api\_database and placement\_database * network: Rename 'create\_pci\_requests\_for\_sriov\_ports' * [placement] Demonstrate bug in consumer generation handling * Test alloc\_cands with indirectly sharing RPs * Switch to oslo\_messaging.ConfFixture.transport\_url * Adapter raise\_exc=False by default * Bump keystoneauth1 minimum to 3.9.0 * conf: Deprecate 'network\_manager' * [placement] Extract create\_allocation\_list * placement: s/None/null/ in consumer conflict msg * Cleanup nits in placement database changes * Fix nits from change Id609789ef6b4a4c745550cde80dd49cabe03869a * Add a microversion for consumer generation support * Ensure that os-traits sync is attempted only at start of process * Isolate placement database config * Optimize member\_of check for nested providers * Clarify placement DB schema migration * Nix unused raise\_if\_custom\_resource\_class\_pre\_v1\_1 * placement: Make API history doc more consistent * Return all nested providers in tree * Add osprofiler config options to generated reference * Fix retrying lower bound in requirements.txt * Optional separate database for placement API * Add certificate validation docs * [placement] Add status and links fields to version document at / * rework allocation handler \_allocations\_dict() * placement: Allocation.consumer field * Ignore UserWarning for scope checks during test runs * [placement] replace deprecated accept.best\_match * Update nova-status & docs: require placement 1.25 * XenAPI: define a new image handler to use vdi streaming * add consumers generation field * Provide a direct interface to 
placement * libvirt: Don't report DISK\_GB if sharing * Remove nova dependencies from test\_resource\_provider * Adjust db using allocation unit tests * Move db using provider unit tests to functional * Use oslo.messaging per-call monitoring * placement: always create consumer records * Extract part of PlacementFixture to placement * fix tox python3 overrides * Change consecutive build failure limit to a weigher * Do not use nova.test in placement.test\_deploy * Do not use nova.test in placement.test\_microversion * Do not use nova.test in placement.test\_handler * Do not use nova.test in placement.test\_fault\_wrap * Do not use nova.test in placement.test\_requestlog * Do not use nova.test in placement.handlers.test\_aggregate * Do not use nova.test in placement.test\_util * Ensure resource class cache when listing usages * api-ref: mention that you can't re-parent a resource provider * Re-base placement object unit tests on NoDBTestCase * [placement] Do not import oslo\_service for log\_options * Fix some inconsistencies in doc * Add nova-manage placement heal\_allocations CLI * mirror nova host aggregate members to placement * Set scope for remaining placement policy rules * Update overriden to overridden * Adding NVMEoF for libvirt driver * Fix doc mistakes * Remove unused function * Fix nits in nested provider allocation candidates(2) * Fix the file name of development-environment.rst * Return all resources in provider\_summaries * placement: Use INNER JOIN for requied traits * Delete duplicate functions in placement test * Use list instead of set for duplicate check * Support nested alloc cands with sharing providers * Fix nits in nested provider allocation candidates * Follow up changes to granular placement policy reviews * Add granular policy rules for allocation candidates * Add granular policy rules for placement allocations * Add granular policy rules for traits in placement * Add granular placement policy rules for aggregates * Add granular policy rules for usages * Honor availability\_zone hint via placement * Add traits check in nested provider candidates * Return nested providers in get\_by\_request * Expand tests for multiple shared resources case * Update placement upgrade docs for nova-api dependency on placement * Placement: allow to set reserved value equal to total for inventory * Update nova-status and docs for required placement 1.24 * Expose instance\_get\_all\_uuids\_by\_host() from DB API and use it * Update the deprecate os\_region\_name option * Fix inconsistency in docs * Add granular policy rules for resource providers inventories * Add granular policy rules for /resource\_classes\* * Implement granular policy rules for placement * Deduplicate config/policy reference docs from main index * Remove deprecated monkey\_patch config options * Debug logs for allocation\_candidates filters * Cleanup ugly stub in TestLocalDeleteAllocations * Add retrying to requirements.txt * [placement] default to accept of application/json when \*/\* * We don't need utils.trycmd any more * Move image conversion to privsep * Add INVENTORY\_INUSE to DELETE /rp/{u}/inventories * placement: Fix HTTP error generation * \_\_str\_\_ methods for RequestGroup, ResourceRequest * add lower-constraints job * Flexibly test keystonmiddleware in placement stack * Fix irrelevant-files in nova-dsvm-multinode-base * Add connection\_parameters to list of items copied from database * update scheduler to use image-traits * Remove support for /os-fping REST API * Address feedback from instance\_list 
smart-cell behavior * Remove remaning log translation in scheduler * Make get\_instance\_objects\_sorted() be smart about cells * Followup for multiple member\_of qparams support * Add tests for alloc cands with poor local disk * placement: Granular GET /allocation\_candidates * Migrate tempest-dsvm-multinode-live-migration job in-tree * Fix typos in Host aggregates documentation * placement: Object changes for granular * Use helpers in test\_resource\_provider (func) * Use test\_base symbols directly * Base test module/class for functional placement db * Deprecate the nova-consoleauth service * Remove [scheduler]/host\_manager config option * doc: Start using openstackdoctheme's extlink extension * support multiple member\_of qparams * [doc]remove nova-cert leftover in doc * Fix the request context in ServiceFixture * Get anchors for sharing providers * Remove IronicHostManager and baremetal scheduling options * Remove stale pip-missing-reqs tox test * Make service all-cells min version helper use scatter-gather * placement: resource requests for nested providers * Handle deprecation of inspect.getargspec * Bump pypowervm minimum to 1.1.15 * Address issues raised in adding member\_of to GET /a-c * xenapi: Documents update for XAPI pool shared SR migration * Remove deprecated [placement] opts * Fix link in placement contributor doc * Update docs for [keystone\_authtoken] changes since Queens * Add root and parent provider uuid to group by clause * Improve check capacity sql * tests for alloc candidates with nested and traits * Address nits in I00d29e9fd80e6b8f7ba3bbd8e82dde9d4cb1522f * Extract generate\_hostid method into utils.py * Provide framework for setting placement error codes * [placement] Support forbidden traits in API * [placement] Filter allocation candidates by forbidden traits in db * [placement] Filter resource providers by forbidden traits in db * [placement] Parse forbidden traits in query strings * Use Queens UCA for nova-multiattach job * Remove the branch specifier from the nova-multiattach job * Make the nova-multiattach job non-voting temporarily * uncap eventlet in nova * Make ResourceClass.normalize\_name handle sharp S * PowerVM: Add proc\_units\_factor conf option * Move test\_report\_client out of placement namespace * doc: add a link in the install guides about configuring neutron * [placement] Fix incorrect exception import * update\_provider\_tree devref and docstring updates * Support extending attached ScaleIO volumes * Transform aggregate.update\_metadata notification * Default to py3 for the pep8 tox env because it's stricter * Remove a outdated warning * [placement] api-ref: Fix parameters * Add tests for \_get\_trees\_matching\_all() function * Move pypowervm requirement to 1.1.12 * Use an independent transaction for \_trait\_sync * Test case: traits don't sync if first access fails * Expand member\_of functional test cases * Fix member\_of with sharing providers * Add tests for alloc\_cands with member\_of * Make generation optional in ProviderTree * SchedulerReportClient.update\_from\_provider\_tree * Complement tests in allocation candidates * trivial: Fix nits in code comments * [placement] Add test for provider summaries * Remove unnecessary code encoding specification * [placement] Add to contributor docs about handler testing * Add trusted\_certs to instance\_extra * Documentation for tenant isolation with placement * [placement] Fix bad management of \_TRAITS\_SYNCED flag * Add require\_tenant\_aggregate request filter * Add 
AggregateList.get\_by\_metadata() query method * Add an index on aggregate\_metadata.value * tox: Make everything work with Python 3 * Fix spelling mistake of HTTPNotFound exception * tests: fixes mock autospec usage * Fix allocation\_candidates not to ignore shared RPs * remove unnecessary short cut in placement * Fix comments in get\_all\_with\_shared() * tox: Remove unnecessary configuration * tox: Fix indentation * Updated from global requirements * Docs: modernise links * Updated from global requirements * Use microversion parse 0.2.1 * Updated from global requirements * Move placement test cases from db to placement * Remove translate and a TODO * Add more functional test for placement.usage * deprecate fping\_path config option * Add disabled field to CellMapping object * Move placement exceptions into the placement package * Add disabled column to cell\_mappings table * Add placeholder migrations for Queens backports * Updated from global requirements * conf: Remove 'db\_driver' config opt * Add 'member\_of' param to GET /allocation\_candidates * Follow the new PTI for document build * docs: Disable smartquotes * Updated from global requirements * placement: Return new provider from POST /rps * placement: generation in provider aggregate APIs * Update contributor/placement.rst to contemporary reality * Updated from global requirements * Reparent placement objects to oslo\_versionedobjects * Move resource provider objects into placement hierarchy * Move resource class fields * Updated from global requirements * New-style \_set\_inventory\_for\_provider * conf: Fix indentation of database options * conf: Remove deprecated 'allow\_instance\_snapshots' opt * Updated from global requirements * Make nova build reproducible * Migrate tempest-dsvm-cells job to an in-tree job definition * Make nova-manage db purge take --all-cells * conf: Remove 'nova.crypto' opts * ca: Remove 'nova/CA' directory * Add simple db purge command * Run post-test archive against cell1 * Removed unnecessary parantheses in yield statements * Refactor WSGI apps and utils to limit imports * Add more functional test for placement.aggregates * Updated from global requirements * Make the nova-next job voting and gating * Updated from global requirements * Updated from global requirements * Updated from global requirements * Move db MAX constants to own file * [placement] use simple FaultWrapper * Move makefs to privsep * Remove unused LOG variables * Add check for redundant import aliases * Check for leaked server resource allocations in post\_test\_hook * rp: GET /resource\_providers?required= * Clarify \`resources\` query param for /r\_p and /a\_c * [placement] api-ref: Fix a missing response code * [placement] Add functional tests for traits API * Updated from global requirements * Remove single quotes from posargs on stestr run commands * Only pull associated \*sharing\* providers * Add a nova-caching-scheduler job to the experimental queue * api-ref: Further clarify placement aggregates * Add functional tests to ensure BDM removal on delete * Drop extra loop which modifies Cinder volume status * Remove deprecated aggregate DB compatibility * Remove old flavor\_create db api method * Remove old flavor\_get\_all db api method * Remove old flavor\_get db api method * Remove old flavor\_get\_by\_name db api method * Remove old flavor\_get\_by\_flavor\_id db api method * Remove old flavor\_destroy db api method * Remove old flavor\_access\_get\_by\_flavor\_id db api method * Test websocketproxy with TLS in the 
nova-next job * Updated from global requirements * install-guide: Wrap long console command * install-guide: Make formatting of console consistent * Clarify the help text for [scheduler]periodic\_task\_interval * Move the nova-next job in-tree and update it * [placement] annotate loadapp as public interface * doc: merge numa.rst to cpu-topologies.rst * [placement] Add sending global request ID in get * [placement] Add sending global request ID in put (3) * Ensure resource classes correctly * [placement] Move body examples to an isolated directory * Bindep does not catch missing libpcre3-dev on Ubuntu * Remove a duplicate colon * fix link * Address comments from I51adbbdf13711e463b4d25c2ffd4a3123cd65675 * Test case: new standard resource class unusable * placement doc: Conflict caveat for DELETE APIs * [placement] Add sending global request ID in put (1) * [placement] Add sending global request ID in post * Zuul: Remove project name * Doc: Nix os-traits link from POST resource\_classes * Reset the \_RC\_CACHE between tests * doc: placement upgrade notes for queens * Add functional tests for traits-based scheduling * Migrate "launch instance" user guide docs * doc: mark the max microversions for queens * Updated from global requirements * Remove old flavor\_access\_add db api methods * Remove old flavor\_access\_remove db api method * Remove old flavor\_extra\_specs\_get db api method * Remove old flavor\_extra\_specs\_delete db api method * Remove old flavor\_access\_get\_by\_flavor\_id db api method * Fix nits in support traits changes * Log options at debug when starting API services under wsgi * set\_{aggregates|traits}\_for\_provider: tolerate set * ProviderTree.get\_provider\_uuids: Top-down ordering * SchedulerReportClient.\_delete\_provider * report client: get\_provider\_tree\_and\_ensure\_root * [Placement] Invalid query parameter could lead to HTTP 500 * Use util.validate\_query\_params in list\_traits * Add functional tests for virt driver get\_traits() method * Implement get\_traits() for the ironic virt driver * [placement] Separate API schemas (resource\_provider) * Fix invalid UUIDs in remaining tests * Add the nova-multiattach job * api-ref: provide more detail on what a provider aggregate is * Updated from global requirements * Bumping functional test job timeouts * Reduce policy deprecation warnings in test runs * Fix SUSE Install Guide: Placement port * Fix missing marker functions * Handle TZ change in iso8601 >=0.1.12 * Updated from global requirements * Fix nits in allocation candidate limit handling * [api] Allow multi-attach in compute api * placement: support traits in allocation candidates API * [placement] Add sending global request ID in delete (3) * Fix 500 in test\_resize\_server\_negative\_invalid\_state * Generalize DB conf group copying * Updated from global requirements * Cleanup redundant want\_version assignment * trivial: Remove crud from 'conf.py' * Fix openstackdocstheme options for api-ref * Updated from global requirements * [placement] Add functional tests for resource class API * correct referenced url in comments * Deduplicate aggregate notification samples * Make sure that functional test triggered on sample changes * Add taskflow to requirements * Updated from global requirements * Enable py36 unit tests in tox * Transform rescue/unrescue instance notifications * Track provider traits in report client * Update links in documents * [placement] Add sending global request ID in delete (2) * Add cross cell sort support for get\_migrations * Add 
reference to policy sample * Updated from global requirements * Qualify the Placement 1.15 release note * Add migration db and object pagination support * Fix OpenStack capitalization * Optionalize instance\_uuid in console\_auth\_token\_get\_valid() * Use method validate\_integer from oslo.utils * Use volume shared\_targets to lock during attach/detach * zuul: Move legacy jobs to project * Move aggregates from report client to ProviderTree * setup.cfg: Explicitly set [build\_sphinx] builder * [placement] Add sending global request ID in delete * Updated from global requirements * Add retry\_on\_deadlock decorator to action\_event\_start * [placement] Enable limiting GET /allocation\_candidates * Provide example for placement last-modified header of now * Remove extensions module * Updated from global requirements * [placement] Add x-openstack-request-id in API ref * [placement] Separate API schemas (allocation\_candidate) * [placement] Separate API schemas (allocation) * Add uuid column to BlockDeviceMapping * [placement] Separate API schemas (resource\_class) * Updated from global requirements * Make request\_spec.spec MediumText * SchedulerReportClient.\_get\_providers\_in\_aggregates * [placement] Separate API schemas (inventory) * [placement] Separate API schemas (aggregate) * [placement] Separate API schemas (trait) * [placement] Separate API schemas (usage) * Pass mountpoint to volume attachment\_create with connector * Pass mountpoint to volume attachment\_update * Update nova-status and docs for nova-compute requiring placement 1.14 * [placement] Add info about last-modified to contrib docs * [placement] Add cache headers to placement api requests * placement: skip authentication on root URI * Add instance action db and obj pagination support * Update Instance action's updated\_at when action event updated * [placement] Fix API reference for microversion 1.14 * Follow up on removing old-style quotas code * Add API and nova-manage tests that use the NoopQuotaDriver * [placement] add name to resource provider create error * [placement] Add 'Location' parameters in API ref * Implement new attach Cinder flow * SchedulerReportClient.\_get\_providers\_in\_tree * Updated from global requirements * Deprecate configurable Hide Server Address Feature * placement: adds REST API for nested providers * archive\_deleted\_instances is not atomic for insert/delete * Updated from global requirements * Fix wrong argument order in functional test * [placement] Fix an error message in API validation * [placement] Add aggregate link note in API ref * Add regression test for rebuilding a volume-backed server * Updated from global requirements * Add description for resource class creation * [placement] re-use existing conf with auth token middleware * Use ksa adapter for keystone conf & requests * [placement]Enhance doc for placement allocation list * Refactor placement version check * Remove old-style quotas code * [placement] Fix format in placement API ref * qemu-img do not use cache=none if no O\_DIRECT support * Updated from global requirements * Updated from global requirements * placement: add nested resource providers * Deprecate the IronicHostManager * Remove deprecated TrustedFilter * [placement] Fix GET PUT /allocations nits * [placement] POST /allocations to set allocations for >1 consumers * Refined fix for validating image on rebuild * Fix the format file name * Updated from global requirements * Finish stestr migration * [placement] Add 'CUSTOM\_' prefix description in API ref * 
[placement] Fix parameter order in placement API ref * Don't overwrite binding-profile * Update bindep.txt for doc builds * [placement] Symmetric GET and PUT /allocations/{consumer\_uuid} * Service token is not experimental * Get auth from context for glance endpoint * vgpu: add enabled white list * cleanup mapping/reqspec after archive instance * Update document related to host aggregate * Add migration\_get\_by\_uuid in db api * placement: Document request headers in api-ref * placement: Document \`in:\` prefix for ?member\_of= * Updated from global requirements * Fix docstring for GET /os-migrations and related DB API * doc: fix link to creating unit tests in contributor guide * placement: AllocCands.get\_by\_{filters => requests} * Updated from global requirements * Revert "Don't overwrite binding-profile" * Don't overwrite binding-profile * [placement] set accept to application/json if accept not set * [placement] Fix a wrong redirection in placement doc * Add Flavor.description attribute * Updated from global requirements * placement: Parse granular resources & traits * RequestGroup class for placement & consumers * conf: Validate '[api] vendordata\_providers' options * conf: Remove 'vendordata\_driver' opt * Fix warning on {'cell\_id': 1} is an invalid UUID * placement: Contributor doc microversion checklist * [placement] avoid case issues microversions in gabbits * add whereto for testing redirect rules * Use ksa adapter for placement conf & requests * Update placement api-ref: allocations link in 1.11 * rp: Remove RP.get\_traits() method * conf: Move additional nova-net opts to 'network' * trivial: Rename 'policy\_check' -> 'policy' * test: Store the OutputStreamCapture fixture * Move project\_id and user\_id to Allocation object * VGPU: Define vgpu resource class * Import user-data page from openstack-manuals * Import the config drive docs from openstack-manuals * Move the idmapshift binary into privsep * Include /resource\_providers/uuid/allocations link * Remove duplicate error info * [placement] Clean up TODOs in allocations.yaml gabbit * Move restart\_compute\_service to a common place * [placement] Confirm that empty resources query causes 400 * [placement] add coverage for update of standard resource class * Add 'done' to migration\_get\_in\_progress\_by\_host\_and\_node filter * rp: fix up AllocList.get\_by\_resource\_provider\_uuid * rp: streamline InventoryList.get\_all\_by\_rp\_uuid() * Nix bug msg from ConfGroupForServiceTypeNotFound * Updated from global requirements * Fix minor input items from previous patches * nova.utils.get\_ksa\_adapter() * Fix instance\_get\_by\_sort\_filters() for multiple sort keys * Make setenv consistent for unit, func, and api-samples * Remove doc todo related to bug/1506667 * [placement] gabbi tests for shared custom resource class * Fix CellDatabases fixture swallowing exceptions * Ensure instance can migrate when launched concurrently * [placement] Update the placement deployment instructions * Do not monkey patch eventlet in unit tests * Support qemu >= 2.10 * doc: make host aggregates examples more discoverable * Add slowest command to tox.ini * Make TestRPC inherit from the base nova TestCase * Live Migration sequence diagram * Deprecate idle\_timeout in api\_database * cleanup test-requirements * Add 400 as error code for resource class delete * fix nova accepting invalid availability zone name with ':' * Remove useless periodic task that expires quota reservations * docs: Rename cellsv2\_layout -> cellsv2-layout * Updated from 
global requirements * Add default configuration files to data\_files * Add fault-filling into instance\_get\_all\_by\_filters\_sort() * Add db.instance\_get\_by\_sort\_filters() * Add instance.interface\_attach notification * Updated from global requirements * doc: Split flavors docs into admin and user guides * Enable custom certificates for keystone communication * Move the dac\_admin privsep code to a new location * Updated from global requirements * doc: rename the Indices and Tables section * [placement] Unregister the ResourceProvider object * [placement] Unregister the ResourceProviderList object * [placement] Unregister the Inventory object * [placement] Unregister the InventoryList object * [placement] Unregister the Allocation object * [placement] Unregister the AllocationList object * [placement] Unregister the UsageList object * [placement] Unregister the ResourceClass object * [placement] Unregister the ResourceClassList object * [placement] Unregister the Trait object * [placement] Unregister the TraitList object * Add single quotes for posargs on jobs * Target context when setting instance to ERROR when over quota * Cleanup running of osprofiler tests * Fix test runner config issues with os-testr 1.0.0 * Fix missed chown call * Updated from global requirements * Revert "Revert "Fix AZ related API docs"" * Revert "Fix AZ related API docs" * [placement] correct error on bad resource class in allocation * api-ref: note the microversions for GET /resource\_providers query params * Fix AZ related API docs * Transform aggregate.remove\_host notification * Transform aggregate.add\_host notification * Typo error about help resource\_classes.inc * Set regex flag on ostestr command for osprofiler tests * Fix broken URLs * Allow setting up multiple cells in the base TestCase * First attempt at adding a privsep user to nova itself * doc: Add configuration index page * doc: Add user index page * Remove usage of kwarg retry\_on\_request in API * Updated from global requirements * conf: Rename two VNC options * doc: link to versioned notification samples from main index * doc: link to placement api-ref and history docs from main index * [placement] Update user doc with api-ref link * [placement] api-ref GET /traits name:startswith * [placement] Require at least one resource class in allocation * Updated from global requirements * [placement] Add test for empty resources in allocation * Add uuid online migration for migrations * Add placeholder migrations for Pike backports * Deprecate CONF.monkey\_patch * Monkey patch the blockdiag extension * docs: Document the scheduler workflow * Updated from global requirements * trivial: Remove some single use function from utils * doc: Address review comments for main index * trivial: Remove dead function, variable * Updated from global requirements * Resource tracker compatibility with Ocata and Pike * [placement] Make placement\_api\_docs.py failing * [placement] Add api-ref for allocation\_candidates * [placement] Add api-ref for RP usages * [placement] Add api-ref for usages * Add documentation for documentation contributions * doc: Import configuration reference * update policy UT fixtures * rework index intro to describe nova * doc: provide more details on scheduling with placement * Add For Operators section to front page * Create For End Users index section * Create reference subpage * Fix all >= 2 hit 404s * [placement] Add api-ref for RP allocations * Updated from global requirements * Add Contributor Guide section page * Update install 
guide to clearly define between package installs * doc: Import administration guide * doc: Import installation guide * doc: Start using oslo\_policy.sphinxext * doc: Start using oslo\_config.sphinxext * doc: Rework README to reflect new doc URLs * fix list rendering in aggregates * [placement] Avoid error log on 405 response * sort redirectmatch lines * add top 404 redirect * [placement] Require at least one allocation when PUT * Add redirect for api-microversion-history doc * Fix 409 handling in report client when deleting inventory * add redirects for existing broken docs urls * Add some more cellsv2 doc goodness * Test resize with placement api * Updated from global requirements * do not pass proxy env variables by tox * Add description on maximum placement API version * Updated from global requirements * Updated from global requirements * add a redirect for the old cells landing page * Remove unnecessary code * Fix example in \_serialize\_allocations\_for\_consumer * deprecate \`\`wsgi\_log\_format\`\` config variable * Updated from global requirements * Improve assertJsonEqual error reporting * Move the last\_bytes util method to libvirt * Use wsgi-intercept in OSAPIFixture * Suppress some test warnings * [placement] Use wsgi\_intercept in PlacementFixture * [placement] Flush RC\_CACHE after each gabbit sequence * Updated from global requirements * Using plain routes for the microversions test * Updated from global requirements * Updated from global requirements * Updated from global requirements * doc: Switch to openstackdocstheme * Remove the unittest for plugin framework * Use plain routes list for versions instead of stevedore * Removed unused 'wrap' property * Remove check\_detach * Remove improper LOG.exception() calls in placement * Updated from global requirements * Fix and optimize external\_events for multiple cells * Updated from global requirements * Update URL home-page in documents according to document migration * Consider instance flavor resource overrides in allocations * Use plain routes list for extension\_info instead of stevedore * Use plain routes list for os-snapshots instead of stevedore * doc: Populate the 'user' section * doc: Populate the 'reference' section * doc: Populate the 'contributor' section * doc: Populate the 'configuration' section * [placement] Add api-ref for allocations * [placement] Add api-ref for RP traits * [placement] Add api-ref for traits * Remove translation of log messages * Consistent policies * [placement] fix 500 error when allocating to bad class * [placement] Update allocation-candidates.yaml for gabbi 1.35 * [placement] cover deleting a custom resource class in use * [placement] cover deleting standard trait * Updated from global requirements * Updated from global requirements * Remove 'create\_rule\_default' * doc: Populate the 'cli' section * Add BDM to InstancePayload * doc: Enable pep8 on doc generation code * doc: Remove dead plugin * Use plain routes list for os-baremetal-nodes endpoint instead of stevedore * Use plain routes list for os-security-group-default-rules instead of stevedore * Use plain routes list for os-security-group-rules instead of stevedore * Use plain routes list for image-metadata instead of stevedore * Use plain routes list for images instead of stevedore * doc: Use consistent author, section for man pages * Use plain routes list for os-networks instead of stevedore * doc: Remove cruft from conf.py * Fix a missing classifier * Trivial: Remove unnecessary format specifier * Updated from global 
requirements * [placement] Improve allocation\_candidates coverage * Reset the traits sync flag in the placement fixtures * Use plain routes list for os-cells endpoint instead of stevedore * placement: support GET /allocation\_candidates * Updated from global requirements * Add scatter gather utilities for cells * Handle version for PUT and POST in PlacementFixture * Add a reset for traits DB sync * Updated from global requirements * Add python 3.5 in classifier * return 400 Bad Request when empty string resources * Add missing microversion documentation * Remove translation of log messages * placement: separate normalize\_resources\_qs\_param * Updated from global requirements * Count floating ips to check quota * Count networks to check quota * Use plain routes list for os-remote-consoles instead of stevedore * Remove multiple create from stevedore * Use plain routes list for os-tenant-networks instead of stevedore * Use plain routes list for os-cloudpipe endpoint instead of stevedore * Use plain routes list for os-quota-classes endpoint instead of stevedore * placement: Add GET /usages to placement API * placement project\_id, user\_id in PUT /allocations * Updated from global requirements * Only auto-disable new nova-compute services * Updated from global requirements * Use plain routes list for os-server-groups endpoint instead of stevedore * Use plain routes list for user\_data instead of stevedore * Use plain routes list for block\_device\_mapping instead of stevedore * Use plain routes list for os-consoles, os-console-auth-tokens endpoint instead of stevedore * [placement] Increase test coverage * [placement] Add api-ref for aggregates * [placement] Use util.extract\_json in allocations handler * [placement] Disambiguate resource provider conflict message * Remove \*\*kwargs passing in payload \_\_init\_\_ * Fix html\_last\_updated\_fmt for Python3 * Remove unused CONF import from placement/auth.py * Add service\_token for nova-glance interaction * Adopts keystoneauth with glance client * placement: use separate tables for projects/users * Use plain routes list for os-services endpoint instead of stevedore * use plain routes list for os-virtual-interfaces * use plain routes list for hypervisor endpoint instead of stevedore * Use plain routes list for os-fping endpoint * Use plain routes list for hosts endpoint instead of stevedore * Use plain routes list for instance actions endpoint * Use plain routes list for server ips endpoint * Revert "Remove Babel from requirements.txt" * Remove Babel from requirements.txt * Sync os-traits to Traits database table * Replace messaging.get\_transport with get\_rpc\_transport * Updated from global requirements * [placement] Add api-ref for resource classes * Updated from global requirements * placement: Specific error for inventory in use * Updated from global requirements * Add database migration and model for consumers * add new test fixture flavor with extra\_specs * Updated from global requirements * Use plain routes list for server diagnostics endpoint * Use plain routes list for os-server-external-events endpoint * Use plain routes list for server-migrations endpoint instead of stevedore * Use plain routes list for server-tags instead of stevedore * Use plain routes list for os-interface endpoint instead of stevedore * Updated from global requirements * [placement] adjust resource provider links by microversion * [placement] Add api-ref for DELETE resource provider * [placement] Add api-ref for PUT resource provider * [placement] Add 
api-ref for GET resource provider * [placement] Add api-ref for POST resource provider * [placement] Add api-ref for DELETE RP inventory * [placement] Add api-ref for PUT RP inventory * [placement] Add api-ref for GET RP inventory * [placement] Add api-ref for DELETE RP inventories * [placement] Add api-ref for PUT RP inventories * [placement] Add api-ref for GET RP inventories * Use plain routes list for os-migrations endpoint instead of stevedore * Updated from global requirements * Migrate to oslo request\_id middleware - mv 2.46 * Send request\_id on cinder calls * re-Allow adding computes with no ComputeNodes to aggregates * Exclude deleted service records when calling hypervisor statistics * [placement] Fix placement-api-ref check tool * Use plain routes list for limits endpoint instead of stevedore * Updated from global requirements * Updated from global requirements * trivial: Remove dead code * Use plain routes list for os-quota-sets endpoint instead of stevedore * Use plain routes list for os-certificates endpoint instead of stevedore * Updated from global requirements * Cache database and message queue connection objects * Fix uuid replacement in aggregate notification test * Updated from global requirements * Use plain routes list for server-password endpoint instead of stevedore * api-ref: Fix examples for add/removeFixedIp action * Updated from global requirements * Updated from global requirements * libvirt: Pass instance to connect\_volume and disconnect\_volume * Remove the can\_host column * Make NovaException format errors fatal for tests * db api: add service\_get\_by\_uuid * Add online data migration for populating services.uuid * Remove cloudpipe APIs * Use six.text\_type() when logging Instance object * Updated from global requirements * Use plain routes list for server-metadata endpoint instead of stevedore * devref and reno for nova-{api,metadata}-wsgi scripts * Add pbr-installed wsgi application for metadata api * Remove nova-cert leftovers * Use plain routes list for os-fixed-ips endpoint instead of stevedore * Use plain routes list for os-availability-zone endpoint instead of stevedore * Use plain routes list for os-assisted-volume-snapshots endpoint * Use plain routes list for os-agents endpoint instead of stevedore * Use plain routes list for os-floating-ip-dns endpoint instead of stevedore * Use plain routes list for os-floating-ips-bulk endpoint instead of stevedore * Use plain routes list for os-floating-ip-pools endpoint instead of stevedore * Use plain routes list for os-floating-ips endpoint instead of stevedore * use plain routes list for os-simple-tenant-usage * Use plain routes list for os-instance-usage-audit-log endpoint instead of stevedore * Support tag instances when boot(1) * Add ability to query for ComputeNodes by their mapped value * Updated from global requirements * Expose StandardLogging fixture for use * Remove all discoverable policy rules * Register osapi\_compute when nova-api is wsgi * Use plain routes list for '/os-aggregates' endpoint instead of stevedore * Use plain routes list for '/os-keypairs' endpoint instead of stevedore * Use plain routes list for flavors-access endpoint instead of stevedore * Use plain routes list for flavors-extraspecs endpoint instead of stevedore * Use plain routes list for flavor endpoint instead of stevedore[1] * Use plain routes list for '/servers' endpoint instead of stevedore * encryptors: Switch to os-brick encryptor classes * Updated from global requirements * Allow CONTENT\_LENGTH to be present 
but empty * [placement] Idempotent PUT /resource\_classes/{name} * conf: Move 'floating\_ips' opts into 'network' * Updated from global requirements * Add test ensure all the microversions are sequential in placement API * fix typos * Remove unused os-pci API * Use deepcopy when process filters in db api * Remove usage of parameter enforce\_type * Spelling error "paramenter" * Updated from global requirements * Deprecate CONF.api.allow\_instance\_snapshots * placement: Add Traits API to placement service * Remove aggregate uuid generation on load from DB * PowerVM Driver: spawn/delete #1: no-ops * Remove dead db api code * remove flake8-import-order * Updated from global requirements * Optimize the link address * Fix joins in instance\_get\_all\_by\_host * Remove the stevedore extension point for server create * Make scheduler target cells to get compute node instance info * [placement] Allow PUT and POST without bodies * Regression test for local delete with an attached volume * Switch from pip\_missing\_reqs to pip\_check\_reqs * doc: Separate the releasenotes guide from the code-review section * Updated from global requirements * Ensure reservation\_expire actually expires reservations * Rename the model object ResourceProviderTraits to ResourceProviderTrait * Short circuit notifications when not enabled * doc: Move code-review under developer policies * Updated from global requirements * Use cursive for signature verification * Add description to policies in aggregates.py * tox: Stop calling config/policy generators twice * [placement] Split api-ref topics per file * remove i18n log markers from nova.api.\* * [placement] add api-ref for GET /resource\_providers * Structure for simply managing placement-api-ref * [placement] Don't use floats in microversion handling * Fix some reST field lists in docstrings * Add check for invalid inventory amounts * Add check for invalid allocation amounts * Remove the Allocation.create() method * Tests: remove .testrepository/times.dbm in tox.ini (functional) * DELETE all inventory for a resource provider * Remove old oslo.messaging transport aliases * Updated from global requirements * flake8: Specify 'nova' as name of app * Updated from global requirements * remove flake8-import-order for test requirements * Introduce fast8 tox target * Duplicate JSON line ending check to pep8 * [placement] Raising http codes on old microversion * Updated from global requirements * doc: add some documentation around quotas * Temporarily untarget context when deleting from cell0 * api-ref: Fix parameters and examples in aggregate API * Teach os-aggregates about cells * Error message should not include SQL command * Add functional test for bad res class in set\_inventory\_for\_provider * Remove unused placement\_database config options * virt: add get\_inventory() virt driver API method * Use flake8-import-order * Add comment to instance\_destroy() * Use Sphinx 1.5 warning-is-error * Add warning on setting secure\_proxy\_ssl\_header * handle uninited fields in notification payload * Updated from global requirements * Add functional test for ip filtering with regex * Remove domains \*-log-\* from compile\_catalog * Updated from global requirements * Updated from global requirements * [placement] Add Traits related table to the api database * Tests: remove .testrepository/times.dbm in tox.ini * Updated from global requirements * Ignore deleted services in minimum version calculation * Remove usage of config option verbose * Clean up metadata param in doc * doc: Don't 
put comments inside toctree * Fix doc generation warnings * Updated from global requirements * More usage of ostestr and cleanup an unused dependency * Make servers API use cell-targeted context * Make CellDatabases fixture work over RPC * Revert "fix usage of opportunistic test cases with enginefacade" * Placement api: set custom json\_error\_formatter in resource\_class * Enable coverage report * Raise correct error instead of class exist in Placement API * Updated from global requirements * Remove mox from unit/api/openstack/compute/test\_aggregates.py * Fix improper prompt when update RC with existed one's name * Placement api: set custom json\_error\_formatter in root * Cleanup some issues with CONF.placement.os\_interface * Placement api: set custom json\_error\_formatter in aggregate and usage * Placement api: set custom json\_error\_formatter in resource\_provider * Fix incorrect example for querying resource for RP * Placement api: set custom json\_error\_formatter in inventory * Enable global hacking checks and removed local checks * Update hacking version * Placement api: set custom json\_error\_formatter in allocations * [3/3]Replace six.iteritems() with .items() * Removed unnecessary parantheses and fixed formation * Reserve migration placeholders for Ocata backports * conf: remove deprecated image url options * Mark compute/placement REST API max microversions for Ocata * Remove pre-cellsv2 short circuit in instance get * Allow placement endpoint interface to be set * [placement] Use modern attributes of oslo\_context * Use is\_valid\_cidr and is\_valid\_ipv6\_cidr from oslo\_utils * Updated from global requirements * Optionally make dynamic vendordata failures fatal * Use a service account to make vendordata requests * Only warn about hostmappings during ocata upgrade * Trivial-fix: replace "json" with "yaml" in policy README * Make api\_samples tests use simple cell environment * Multicell support for instance listing * Updated from global requirements * Amend the PlacementFixture * Updated from global requirements * Add a PlacementFixture * placement: create aggregate map in report client * Remove references to Python 3.4 * Move to tooz hash ring implementation * Integrate OSProfiler and Nova * Remove invalid URL in gabbi tests * Updated from global requirements * Add rudimentary CORS support to placement API * Updated from global requirements * Updated from global requirements * placement: validate member\_of values are uuids * Expose a REST API for a specific list of RPs * [py35] Fixes to get rally scenarios working * Add service\_token for nova-neutron interaction * Updated from global requirements * Add service\_token for nova-cinder interaction * XenAPI Use os-xenapi lib for nova * Document testing process for zero downtime upgrade * [2/3]Replace six.iteritems() with .items() * docs: sort the Architecture Concepts index * Make the SingleCellSimple fixture a little more comprehensive * [placement] fix typo in call to create auth middleware * HTTP interface for resource providers by aggregates * Return uuid attribute for aggregates * Move quota options to a config group * Transform aggregate.delete notification * Transform aggregate.create notification * move gate hooks to gate/ * Make test\_compute pass with CONF.use\_neutron=True by default * placement: Do not save 0-valued inventory * [placement] start a placement\_dev doc * Remove Rules.load\_json warning * Handle unicode when dealing with duplicate aggregate errors during migration * Updated from global 
requirements * [TrivialFix] Fix comment and function name typo error * conf: Remove 'virt' file * Removes unnecessary utf-8 encoding * Add nova-status upgrade check command framework * conf: make 'default' upper case * Updated from global requirements * Make nova-manage cell\_v2 discover\_hosts tests use DBs * Updated from global requirements * Fix some release notes in preparation for the o-2 beta release * Only return latest instance fault for instances * conf: fix formatting in base * Add Python 3.5 functional tests in tox.ini * Simple tenant usage pagination * Remove the EC2 compatible API tags filter related codes * Corrects the type of a base64 encoded string * Fix instructions for running simple\_cell\_setup * Refactor REGEX filters to eliminate 500 errors * Setup CellsV2 environment in base test * Return 400 when name is more than 255 characters * Check that all JSON files don't have \r\n in line * conf: Remove config option compute\_ manager * Add SingleCellSimple fixture * Make RPCFixture support multiple connections * Updated from global requirements * Revert "reduce pep8 requirements to just hacking" * Return 400 when name is more than 200 characters * Fix a typo in a comment in microversion history * Add a CellDatabases test fixture * Pass context as kwarg instead of positional arg to get\_engine * Require cellsv2 setup before migrating to Ocata * Fix placement API version history 1.1 title * placement: REST API for resource classes * conf: Remove deprecated service manager opts * Updated from global requirements * Always use python2.7 for docs target * libvirt: Cleanup test\_create\_configdrive * Handle maximum limit in schema for int and float type parameters * conf: Trivial fix of indentation in 'api' * hacking: Use uuidutils or uuidsentinel to generate UUID * Replace uuid4() with uuidsentinel * conf: Move api options to a group * [scheduler][tests]: Fix incorrect aggr mock values * Show team and repo badges on README * Placement api: Add informative message to 404 response * conf: remove deprecated cert\_topic option * conf: remove deprecated exception option * Remove redundant VersionedObject Fields * [placement] increase gabbi coverage of handlers.resource\_provider * [placement] increase gabbi coverage of handlers.inventory * [placement] increase gabbi coverage of handlers.allocation * Separate CRUD policy for server\_groups * Use pick\_context\_manager throughout DB APIs * Database poison note * Implement get and set aggregates in the placement API * Updated from global requirements * Typo error allocations.yaml * [placement] Enforce min\_unit, max\_unit and step\_size * Add the initial documentation for the placement API * conf: fix formatting in wsgi * Change database poison warning to an exception * Updated from global requirements * Placement api: 404 response do not indicate what was not found * Updated from global requirements * [placement] add a placement\_aggregates table to api\_db * Updated from global requirements * Add explicit dependency on testscenarios * Updated from global requirements * conf: Remove extraneous whitespace * EventReporterStub * placement: raise exc when resource class not found * encryptors: Workaround mangled passphrases * Updated from global requirements * Replace admin check with policy check in placement API * Fix import statement order * Updated from global requirements * Make build\_requests.instance MediumText * Updated from global requirements * conf: Removed TODO note and updated desc * Remove bandit.yaml in favor of defaults * 
Add swap volume notifications (error) * doc: Integrate oslo\_policy.sphinxpolicygen * [placement] Add support for a version\_handler decorator * Mention API V2 should no longer be used * compute: fixes python 3 related unit tests * Explicitly name commands target environments * Updated from global requirements * Tests: improve assertJsonEqual diagnostic message * Correct bug in microversion headers in placement * Updated from global requirements * Removal of tests with different result depending on testing env * Add debug to tox environment * placement: change resource class to a StringField * Remove nova/openstack/\* from .coveragerc * Remove deprecated nova-all binary * Require WebOb>=1.6.0 * hacking: Use assertIs(Not), assert(True|False) * Use more specific asserts in tests * Add quota related tables to the api database * Always use python2.7 for functional tests * placement: add new resource\_classes table * Add swap volume notifications (start, end) * Add a hacking rule for string interpolation at logging * Tests: fix a typo * conf: Group scheduler options * Updated from global requirements * Updated from global requirements * Updated from global requirements * Fix periodic-nova-py{27,35}-with-oslo-master * Use gabbi inner\_fixtures for better error capture * Updated from global requirements * [placement] reorder middleware to correct logging context * Remove stale pyc files when running the cover job * [placement] ensure that allow headers are native strings * Fix a few typos in API reference * Archive instance-related rows when the parent instance is deleted * Unwind circular import issue with api / utils * Remove context object in oslo.log method * Move notification\_format and delete rpc.py * Updated from global requirements * Cleanup some redundant USES\_DB\_SELF usage * [placement] Allow both /placement and /placement/ to work * hacking: Always use 'assertIs(Not)None' * [placement] 404 responses do not cause exception logs * Replace uuid4() with generate\_uuid() from oslo\_utils * Remove redundant str typecasting * Remove nova.image.s3 and configs * Updated from global requirements * Add placeholder DB migrations for Ocata * Remove PCI parent\_addr online migration * Make test logging setup fixture disable future setup * Add hacking checks for xrange() * Add deprecated\_since parameter * [placement] Manage log and other output in gabbi fixure * Updated from global requirements * [placement] Adjust the name of the gabbi tests * Move wsgi-intercept to test-requirements.txt * Remove default=None for config options * Updated from global requirements * Don't pass argument sqlite\_db in method set\_defaults * Update minimum requirement for netaddr * [placement] consolidate json handling in util module * Fix an error in archiving 'migrations' table * [placement] Mark HTTP error responses for translation * [placement] prevent a KeyError in webob.dec.wsgify * conf: Make list->dict conversion more specific * Revert "tox: Don't create '.pyc' files" * Improve help text for service options * [placement] functional test for report client * [placement] Correct serialization of inventory collections * [placement] make PUT inventory consistent with GET * Additional logging for placement API * [placement] cleanup some incorrect comments * Updated from global requirements * Increase BDM column in build\_requests table * Pass GENERATE\_HASHES to the tox test environment * [placement] add two ways to GET allocations * [placement] Add some tests ensuring unicode resource provider info * db: retry on 
deadlocks while adding an instance * [placement] Allow inventory to violate allocations * [placement] clean up some nits in the requestlog middleware * Body Verification of os-aggregates.inc * Move placement api request logging to middleware * [placement] Fix misleading comment in wsgi loader * Updated from global requirements * Add support for allocations in placement API * Add basic logging to placement api * Ignore generated merged policy files * Register keystone opts for placement sample config * Remove the incomplete wsgi script placement-api.py * rt: ensure resource provider records exist from RT * create placement API wsgi entry point * Documentation for the vendordata reboot * [placement] remove a comment that is no longer a todo * Updated from global requirements * Use StableObjectJsonFixture from o.vo * Adds nova-policy-check cmd * Reduce code complexity - api.py * Revert "Optional separate database for placement API" * In InventoryList.find() raise NotFound if invalid resource class * Updated from global requirements * Add oslopolicy script runs to the docs tox target * Add entry\_point for oslo policy scripts * List system dependencies for running common tests * Manage db sync command for cell0 * removed db\_exc.DBDuplicateEntry in bw\_usage\_update * Updated from global requirements * Add support for usages in the placement API * Add support for inventories to placement API * Improve placement API 404 and 405 response tests * Fix 'No data to report' error * In placement API send microversion header when error * placement: add filtering by attrs to resource\_providers * Add support for resource\_providers urls * Updated from global requirements * Add placement API web utility methods * Fix consistency in API conf * Improve consistency in WSGI opts * Maintain backwards compat for listen opts * Optional separate database for placement API * config options: improve help text of database (related) options (2/2) * config options: improve help text of database (related) options (1/2) * Remove hacking check [N347] for config options * List instances for secgroup without joining on rules * Updated from global requirements * Remove left over conf placeholders * Fix handling of status in placement API json\_error\_formatter * Use constraints for all tox environments * Move JSON linting to pep8 * Set enforce\_type=True in method flags * Use constraints for releasenotes * Check opt consistency for api.py * Refresh README and its docs links * Add NoopConductorFixture * Config options: base path configuration * Remove deprecated legacy\_api config options * config options: Improve help for base * Improve consistency in API * network: introduce helper APIs for dealing with os-vif objects * update wording around pep8 exceptions * Updated from global requirements * Merged barbican and key\_manager conf files into one * TrivialFix: Fixed a typo in nova/test.py * Updated from global requirements * Updated from global requirements * Add objects.ServiceList.get\_all\_computes\_by\_hv\_type * Address feedback on cell-aggregate-api-db patches * Updated from global requirements * Add data migration methods for Aggregate * Initialise oslo.privsep early in main * Aggregate create and destroy work against API db * Make Aggregate.save work with the API db * Trivial option fixes * Properly quote IPv6 address in RsyncDriver * Fixed typos in nova, nova/api, nova/cells directory * Reminder that release notes are built from commits * Add initial framing of placement API * Updated from global requirements * 
Remove leftover list\_opts entry points * Remove nova.cache\_utils oslo.config.opts entrypoint * Remove neutronv2.api oslo.config.opt entry point * Updated from global requirements * Make Aggregate metadata functions work with API db * Use deprecated\_reason for network quota options * New style vendordata support * Add metadata server fixture * Check opt group and type for nova.conf.service.py * Deprecate network quota configuration * Verify os-aggregates.inc on sample files * :Add missing %s in print message * Update tox.ini: Constraints are possible for api\* jobs * Make Aggregate host operations work against API db * Replace deprecated LOG.warn with LOG.warning * Add prototype feature classification matrix * Use constraints for coverage job * Remove deprecated network\_api\_class option * Remove redundant DEPRECATED tag from help messages * Add VirtualInterface.destroy() * Add block\_device\_mappings to BuildRequest * 'limit' and 'marker' support for db\_api and keypair\_obj * Don't overwrite MarkerNotFound error message * tox: Use conditional targets * tox: Don't create '.pyc' files * Add Allocation and AllocationList objects * Fix invalid import order * Hacking check for \_ENFORCER.enforce() * Hacking check for policy registration * Add a py35 environment to tox * Microversion 2.33 adds pagination support for hypervisors * Transform instance.delete notifications * Log DB exception if VIF creation fails * Add policy sample generation * \_security\_group\_get\_by\_names cleanup * Improve help text for wsgi options * Add ability to select specific tests for py34 * Remove mox from unit/compute/test\_compute.py (8) * remove personality extension * remove preserve-ephemeral rebuild extension * remove access\_ips extension * policy: Replaces 'authorize' in nova-api (part 1) * objects: adding an update method to virtual\_interface * policy: Add defaults in code (part 1) * Add console auth tokens db api methods * Port test\_pipelib and test\_policy to Python 3 * Add instance groups tables to the API database * Updated from global requirements * fix developer docs on API * remove os-disk-config part 4 * Updated from global requirements * Remove the nova.compute.resources entrypoint * Add console auth tokens table and model * Updated from global requirements * Remove mox from tests/unit/objects/test\_aggregate.py * Remove api\_rate\_limit config option * Tear down of os-disk-config part 2 * Trivial-Fix: Fix typos * Updated from global requirements * Make Aggregate.get\_by\_uuid use the API db * api-ref: parameter verification for os-aggregates * Enable all extension for all remaining sample tests * tox.ini: Remove unnecessary comments in api-ref target * Updated from global requirements * Fix update inventory for multiple providers * Improve the help text for cells options (7) * Add a get\_by\_uuid for aggregates * Remove v2 extension setting from functional tests * Make the base options definitions consistent * Revert inventory/allocation child DB linkage * add "needs:\*" tags to the config option modules * Updated from global requirements * remove /v2.1/{tenant\_id} from all urls * Updated from global requirements * Cancelled live migration are not in progress * Fix multipath iSCSI encrypted volume attach failure * Remove legacy v2 API code completely * Make AggregateList.get\_ return API & cell db items * Make Aggregate.get operation favor the API db * Add aggregates tables to the API db * Updated from global requirements * Updated from global requirements * Use oslo\_log instead of logging 
* Updated from global requirements * api and availablity\_zone opt definition consistent * Return 400 HTTP error for invalid flavor attributes * No disable reason defined for new services * Make available to build docs with python3 * Updated from global requirements * remove db2 support from tree * Pass OS\_DEBUG to the tox test environment * Add resource provider tables to the api database * Let setup.py compile\_catalog process all language files * use\_neutron\_default\_nets: StrOpt ->BoolOpt * Updated from global requirements * Completed migrations are not "in progress" * Make flavor-manage api call destroy with Flavor object * Updated from global requirements * Drop fields from BuildRequest object and model * config options: centralize exception options * Config options: move set default opt of db section to centralized place * Move config options from nova/api directory (5) * Make some build\_requests columns nullable * migrate to os-api-ref * config options: centralize section "database" + "api\_database" * Follow-up for the API config option patch * config options: move s3 related options * config options: centralize default flavor option * Fix migration query with unicode status * Config options: centralize cache options * Updated from global requirements * centralized conf: nova/network/rpcapi.py * Improve the help text for the API options (4) * Improve the help text for the API options (3) * Add Keypairs to the API database * Drop paramiko < 2 compat code * Correct some misspell words in nova * Improve the help text for the API options (2) * Improve the help text for the API options (1) * Move config options from nova/api directory (4) * Move config options from nova/api directory (3) * Move config options from nova/api directory (2) * Move config options from nova/api directory (1) * Remove 400 as expected error * Don't raise error when filtering on custom metadata * Add pycrypto explicitly * Config options: centralize driver libvirt options (1) * Remove legacy v2 unit tests[a-e] * Config options: Centralize servicegroup options * Archive instance\_actions and instance\_actions\_event * Updated from global requirements * Add ability to filter migrations by instance uuid * Updated from global requirements * Replace key manager with Castellan * complete Method Verification of aggregates * Config options: Centralize netconf options * Updated from global requirements * Config options: centralize section "ssl" * add tags to files for the content verification phase * Final warnings removals for api-ref * Fix sample path for aggregate, certificate, console * Updated from global requirements * Fix json response example heading in api ref * Fix "Creates an aggregate" parameters * Properly clean up BDMs when \_provision\_instances fails * move sphinx h3 to '-' instead of '^' * Initial use of microversion\_parse * Add instance/instance\_uuid to build\_requests table * Updated from global requirements * Import RST files for documentation * Fix doc build if git is absent * Updated from global requirements * Add AllServicesCurrent fixture * Drop compute node uuid online migration code * config options: centralize 'spice' options * Updated from global requirements * Config options: centralize base path configuration * remove alembic from requirements.txt * Config options: centralize section "xvp" * Updated from global requirements * db: retry instance\_info\_cache\_update() on deadlock * config options: centralize quota options * DB API changes for the nova-manage quota\_usage\_refresh 
command * Fix typo in compute node mega join comments * Add api-ref/build/\* to .gitignore * Config options: Centralize console options * Config options: Centralize notification options * Added server tags support in nova-api * Added db API layer to add instance tag-list filtering support * Config options: centralize "configdrive" options * config options: centralize baseproxy cli options * Config options: Centralize neutron options * Config options: Centralize ipv6 options * Remove flavor seeding from the base migration * Updated from global requirements * Improve 'monkey\_patch' conf options documentation * config options: centralize section: "crypto" * config options: Centralise 'monkeypatch' options * config options: Centralise 'utils' options * config options: Centralize upgrade\_levels section * config options: Centralize mks options * config options: Centralize vmware section * config options: centralize section "service" * config options: centralize section "guestfs" * config options: centralize section "workarounds" * config options: Centralize 'nova.rpc' options * Nuke cliutils from oslo-incubator * Updated from global requirements * Block flavor creation until main database is empty * config options: Centralise 'image\_file\_url' options * config options: centralize section: "glance" * Add Service.get\_minimum\_version\_multi() for multiple binaries * remove the ability to disable v2.1 * Make git clean actually remove covhtml * Make compute\_node\_statistics() use new schema * Config options: Centralize consoleauth options * config options: centralize section "cloudpipe" * Add sample API content * Config options: Centralize debugger options * config options: centralize section: "keymgr" * config options: centralize xenserver options * trivial: Fix alignment of wsgi options * config options: Remove 'wsgi\_' prefix from opts * Removes some redundant words * Include CellMapping in InstanceMapping object * Move config options from nova/network/manager.py * Add a DatabasePoisonFixture * config options: Move wsgi options into a group * config options: centralize section: "rdp" * Fixes hex decoding related unit tests * Config options: centralize section "hyperv" * config options: Centralise floating ip options * Add backrefs to api db models * Remove auto generated module api documentation * Add a hacking check for test method closures * Make Flavor.get operations prefer the API database * Error on API Guide warnings * Add placeholder migrations for Mitaka backports * Update .gitreview for stable/mitaka * Fix reno reverts that are still shown * config options: centralize cinder options * register the config generator default hook with the right name * Change SpawnIsSynchronous fixture return * fixed log warning in sqlalchemy/api.py * Add include\_disabled parameter to service\_get\_all\_by\_binary * Missing info\_cache.save() in db sqlalchemy api * Soft delete instance group member when delete instance * Add Database fixture to sync to a specific version * Drop the use of magic openstack project\_id * Aggregate object fixups * Add ComputeNode and Aggregate UUID operations to nova-manage online migrations * nova-manage: Declare a PciDevice online migration script * Forbid new legacy notification event\_type * Remove unused methods in nova/utils.py * Fix string interpolations at logging calls * Generate better validation error message when using name regexes * Updated from global requirements * update tests for use\_neutron=True; fix exposed bugs * Deprecate db\_driver config option * 
Use db connection from RequestContext during queries * Make InstanceMappings.cell\_id nullable * Added Keystone and RequestID headers to CORS middleware * Use new inventory schema in all compute\_node gets * Use new inventory schema in compute\_node\_get\_all() * Deprecate nova.hooks * Adjust resource-providers models for resource-pools * Update time is not updated when metadata of aggregate is updated * Do not use constraints for venv * Add new APIs and deprecate old API for migrations * Updated from global requirements * Add build\_requests database table and model * Make db.aggregate\_get a reader not a writer * Use constant\_time\_compare from oslo.utils * reduce pep8 requirements to just hacking * fix usage of opportunistic test cases with enginefacade * Creates flavor\* tables in API database * add a place for functional test to block specific regressions * Allocate uuids for aggregates as they are created or loaded * bug and tests in 'instance\_info\_cache' * Updated from global requirements * Fix networking exceptions in ComputeTestCase * tox: Remove 'oslo.versionedobjects' dependency * Add a column for uuid to aggregate\_hosts * Failed migration shoudn't be reported as in progress * always use python2.7 for pep8 * Hacking: check for deprecated os.popen() * Add StableObjectJsonFixture and use it in our base test class * always use pip constraints * Reorder name normalization for DNS * Updated from global requirements * Fix spelling mistake * Add methods for RequestContext to switch db connection * Config options: centralize options in conductor api * enginefacade: remove 'get\_session' and 'get\_api\_session' * Add new API to force live migration to complete * Add new DB API method to retrieve migration for instance * Updated from global requirements * enginefacade: 'flavor' * Updated from global requirements * enginefacade: test\_db\_api cleanup, missed decorators * config options: Centralise 'vnc' options * config options: centralize section "wsgi" * config options: add hacking check for help text length * Update the home-page * Switch to oslo.cache lib * Remove all remaining references to Quantum * Spread allocations of fixed ips * Updated from global requirements * Revert "Added new scheduler filter: AggregateTypeExtraSpecsAffinityFilter" * enginefacade: 'instance\_tags' * Added new scheduler filter: AggregateTypeExtraSpecsAffinityFilter * Migrate from keystoneclient to keystoneauth * Generate doc for versioned notifications * doc: add devref about versioned notifications * Updated from global requirements * Persist the request spec during an instance boot * Config options: centralize options in availability\_zones * Config options: centralize section "cells" * Use stevedore for scheduler driver * Use stevedore for scheduler host manager * enginefacade: 'instance\_group' * enginefacade: 'floating\_ip' * enginefacade: 'compute\_node' * enginefacade: 'service' * Updated from global requirements * Fix docstrings for sphinx * Add ITRI DISCO os-brick connector for libvirt * enginefacade: 'security\_group' * enginefacade: 'instance' * enginefacade: 'fixed\_ip' * enginefacade: 'quota' and 'reservation' * Python3: Replace dict.iteritems with six.iteritems * Updated from global requirements * Validate translations * enginefacade: 'ec2\_instance' and 'instance\_fault' * enginefacade: 'block\_device\_mapping' * Remove releasenotes/build between releasenotes runs * Changed filter\_by() to filter() during filtering instances in db API * config options: Centralise PCI options * Use of 
six.PY3 should be forward compatible * Revert "Workaround reno reverts by accepting warnings" * Workaround reno reverts by accepting warnings * Move config options from nova/cert directory * Fix undetected races when getting BDMs by volume id * Fix instance not destroyed after successful evacuation * enginefacade: 'aggregate' * hacking: check for common double word typos * update min tox version to 2.0 * nova conf single point of entry: fix error message * Remove NovaObjectDictCompat from Aggregate * single point of entry for sample config generation * Remove Deprecated EC2 and ObjectStore impl/tests * Remove null AZ tests from API tests * Updated from global requirements * Replace deprecated library function os.popen() with subprocess * Correct the code description * Stop explicitly running test discovery for py34 * introduce \`\`stub\_out\`\` method to base test class * Move Process and Mentoring pages to devref * remove use of \_get\_regexes in samples tests * config options: Centralise 'virt.hardware' options * Updated from global requirements * db: querry to retrieve all pci device by parent address * Python 3 deprecated the logger.warn method in favor of warning * enginefacade: 'instance\_metadata' * Reduce the number of db/rpc calls to get instance rules * Updated from global requirements * enginefacade: 'bw\_usage', 'vol\_usage' and 's3\_image' * Nuke EC2 API from api-paste and remove wsgi support * enginefacade: 'vif' and 'task\_log' * config options: Centralise 'virt.ironic' options * enginefacade: 'migration' * centeralized conf:compute/emphemeral\_storage\_encryption * Filter by leased=False when allocating fixed IPs * Updated from global requirements * Add placeholders for config options * Block requests 2.9.0 * Add signature\_utils module * Add uuidsentinel test module * Updated from global requirements * Deprecated tox -downloadcache option removed * Fix wrap\_exception to get all arguments for payload * Cache SecurityGroupAPI results from neutron multiplexer * [Py34] Enable api.openstack.test\_wsgi unit test * default host to service name instead of uuid * Updated from global requirements * Fix capitalization of IP * Add a note about fixing "db type could not be determined" with py34 * docs: add test strategy and feature classification * Remove SQLite BigInteger/Integer translation logic * Fixes dict keys and items references for Python 3 * add api-samples tox target * Updated from global requirements * Hyper-V: adds os-win library * Updated from global requirements * Config options: centralize section "scheduler" * Remove version from setup.cfg * force releasenotes warnings to be treated as errors * Add persistence to the RequestSpec object * Updated from global requirements * add hacking check for config options location * use graduated oslo.policy * TrivialFix: remove 'deleted' flag * Use version convert methods from oslo\_utils.versionutils * Modify Aggregate filters for RequestSpec * Fixed incorrect name of 'tag' and 'tag-any' filters * Updated from global requirements * enginefacade: 'agent' and 'action' * config options: centralize section "serial\_console" * Replaced private field in get\_session/engine with public method * Reverse sort tables before archiving * Updated from global requirements * Replaced deprecated timeutils methods * Updated from global requirements * Prepare filters for using RequestSpec object * Remove IN-predicate warnings * Fix paths for api-guide build * Don't track migrations in 'accepted' state * Replace N block\_device\_mapping queries 
with 1 * Add reno for release notes management * Rearranges to create new Compute API Guide * Aggregate Extra Specs Filter should return if extra\_specs is empty * Updated from global requirements * Use ObjectVersionChecker fixture from oslo.versionedobjects * Block oslo.messaging 2.8.0 * enginefacade: 'provider\_fw', 'console\_pool' and 'console' * enginefacade: 'network' * enginefacade: 'dnsdomain' and 'ec2' * enginefacade: 'certificate' and 'pci\_device' * enginefacade: 'key\_pair' and 'cell' * enginefacade: 'instance\_info' and 'instance\_extra' * Use EngineFacade from oslo\_db.enginefacade * Remove vcpu resource from extensible resource tracker * Fix booting fail when unlimited project quota * Remove "Can't resolve label reference" warnings * Remove obj\_relationships from objects * Revert "Implement online schema migrations" * Add -constraints sections for CI jobs * Updated from global requirements * Expands python34 unit tests list * Add tags to .gitignore * Updated from global requirements * Add a nova functional test for the os-server-groups GET API with all\_projects parameter * hacking check for contextlib.nested for py34 support * Print number of rows archived per table in db archive\_deleted\_rows * Updated from global requirements * Remove redundant deps in tox.ini * docs: add the scheduler evolution plans * Updated from global requirements * Ignore errorcode=4 when executing \`cryptsetup remove\` command * Omnibus stable/liberty fix * Updated from global requirements * Updated from global requirements * Add a code-review guideline document * Updated from global requirements * Make archive\_deleted\_rows\_for\_table private * Log DBReferenceError in archive\_deleted\_rows\_for\_table * Use DBReferenceError in archive\_deleted\_rows\_for\_table * Add testresources used by oslo.db fixture * Remove unused context parameter from db.archive\_deleted\_rows\* methods * Updated from global requirements * Honor until\_refresh config when creating default security group * remove sphinxcontrib-seqdiag * Add get\_minimum\_version() to Service object and DB API * Updated from global requirements * Updated from global requirements * Add Pillow to test-requirements.txt * Add Pillow to test-requirements.txt * Use os-testr for py34 tox target * Add sample config file to nova docs * Identify more py34 tests that already pass * Fix the help text of monkey\_patch config param * Filter leading/trailing spaces for name field in v2.1 compat mode * Give instance default hostname if hostname is empty * Add some devref for AZs * Change parameter name in utility function * Open Mitaka development * Change ignore-errors to ignore\_errors * Pep8 didn't check api/openstack/common.py * Updated from global requirements * Devref: Document why conductor has a task api/manager * Allow filtering using unicode characters * Updated from global requirements * Fix typo in HACKING.rst * Reuse method to convert key to passphrase * Set vif and allocated when associating fixed ip * Updated from global requirements * Updated from global requirements * Remove more 'v3' references from the code * Expose keystoneclient's session and auth plugin loading parameters * Add constraint target to tox.ini * Don't "lock" the DB on expand dry run * Update from global requirements * Don't query database with an empty list of tags for creation * Remove duplicate NullHandler test fixture * Fix permission issue of server group API * Make query to quota usage table order preserved * Rm openstack/common/versionutils from setup.cfg * 
Remove v3 references in unit test 'contrib' * Removed unused dependency: discover * db: Add the migration\_context to the instance\_extra table * api: deprecate the concept of extensions in v2.1 * Cleanup for merging v2 and v2.1 functional tests * Remove doc/source/api and doc/build before building docs * Updated from global requirements * Move objects registration in tests directory * Updated from global requirements * Remove merged sample tests and file for v2 tests * Updated from global requirements * Gate on nova.conf.sample generation * Add rootwrap daemon mode support * Remove the useless require\_admin\_context decorator * Remove unused db.security\_group\_rule\_get\_by\_security\_group\_grantee() * Make compute\_api.trigger\_members\_refresh() issue a single db call * nova.utils.\_get\_root\_helper() should be public * Re-write way of compare APIVersionRequest's * Remove last of the plugins/v3 from unit tests * Rename classes containing 'v3' to 'v21' * Move the v2 api\_sample functional tests * Updated from global requirements * Don't query database with an empty list of tags for IN clause * Move V2.1 API unittest to top level directory * Move legacy v2 api smaple tests * Make pagination tolerate a deleted marker * Updated from global requirements * Add hacking check for eventlet.spawn() * Updated from global requirements * Updated from global requirements * Remove 'v3' directory for v2.1 json-schemas * Move v2.1 code to the main compute directory - remove v3 step3 * Move existing V2 to legacy\_v2 - step 2 * Move existing V2 to legacy\_v2 * Add hacking check for greenthread.spawn() * Suppress not image properties for image metadata from volume * Updated from global requirements * docs: add link to liberty summit session on v2.1 API * Add documentation for the nova-cells command * Fixed incorrect behaviour of method \_check\_instance\_exists * Skip additionalProperties checks when LegacyV2CompatibleWrapper enabled * :Add documentation for the nova-idmapshift command * Remove db layer hard-code permission checks for keypair * Fix a couple dead links in docs * Updated from global requirements * Remove 'scheduled\_at' - DB cleanup * Change List objects to use obj\_relationships * Remove db layer hard-code permission checks for instance\_get\_all\_hung\_in\_rebooting * Undo tox -e docs pip install sphinx workaround * Set autodoc\_index\_modules=True so tox -e docs builds module docs again * return more details on assertJsonEqual fail * Add documentation for block device mapping * Implement compare-and-swap for instance update * docs: add a placeholder link to mentoring docs * Updated from global requirements * Move to using ovo's remotable decorators * Get py34 subunit.run test discovery to work * Enable python34 tests for nova/tests/unit/scheduler/test\*.py * Replace openssl calls with cryptography lib * Switch to using os-brick * Updated from global requirements * Added removing of tags from instance after its deletion * Scheduler: enhance debug messages for multitenancy aggregates * Updated from global requirements * tox: make it possible to run pep8 on current patch only * Switch to the oslo\_utils.fileutils * Remove unused import of the compute\_topic option from the DB API * Updated from global requirements * Remove unnecessary oslo namespace import checks * Switch to oslo.reports * docs: clear between current vs future plans * Remove db layer hard-code permission checks for fixed\_ip\_associate\_\* * Updated from global requirements * Fix Python 3 issues in nova.utils and 
nova.tests * Remove db layer hard-code permission checks for instance\_get\_all\_by\_host\_and\_not\_type * Revert "Remove useless db call instance\_get\_all\_hung\_in\_rebooting" * Remove db layer hard-code permission checks for provider\_fw\_rule\_\* * Remove db layer hard-code permission checks for archive\_deleted\_rows\* * Revert "Implement compare-and-swap for instance update" * Add tool to build a doc latex pdf * Update HACKING.rst for running tests and building docs * Remove db layer hard-code permission checks for quota\_class\_create/update * Remove db layer hard-code permission checks for quota\_class\_get\_all\_by\_name * Remove db layer hard-code permission checks for reservation\_expire * Use stevedore for loading monitor extensions * Switch to oslo.service library * Fix for mock-1.1.0 * Port crypto to Python 3 * Remove useless db call instance\_get\_all\_hung\_in\_rebooting * Handle KeyError when volume encryption is not supported * Implement compare-and-swap for instance update * Added method exists to the Tag object * Add DB2 support * Make evacuate leave a record for the source compute host to process * removed unused method \_get\_default\_deleted\_value * Remove flavor migration from db\_api and nova-manage * Remove unneeded OS\_TEST\_DBAPI\_ADMIN\_CONNECTION * Switch from MySQL-python to PyMySQL * Port test\_exception to Python 3 * devref: virtual machine states and transitions * Remove db layer hard-code permission checks for floating\_ip\_dns * Updated from global requirements * Add bandit for security static analysis testing * Enable python34 tests for nova/tests/unit/objects/test\*.py * Soft delete system\_metadata when destroy instance * Remove python3 specific test-requirements file * Remove db layer hard-code permission checks for network\_set\_host * Block subtractive operations in migrations for Kilo and beyond * Remove db layer hard-code permission checks for network\_disassociate * Fix Python 3 issues in nova.db.sqlalchemy * utils: ignore block device mapping in system metadata * Changes conf.py for Sphinx build because oslosphinx now contains GA * Updated from global requirements * Add explicit alembic dependency * Use oslo-config-generator instead of generate\_sample.sh * Return bandwidth usage after updating * Update version for Liberty * The devref for Nova stable API * test: add MatchType helper class as equivalent of mox.IsA * Updated from global requirements * Add Host Mapping table to API Database * Implement online schema migrations * Make instance usage audit use the brand new TaskLog object * Updated from global requirements * Associating of floating IPs corrected * Cleanup wording for the disable\_libvirt\_livesnapshot workaround option * Updated from global requirements * Send Instance object to cells instance\_update\_at\_top * fix "down" nova-compute service spuriously marked as "up" * Link to microversion history in docs * Remove db layer hard-code permission checks for quota\_usage\_update * pass environment variables of proxy to tox * Remove db layer hard-code permission checks for quota\_get\_all\_\* * Updated from global requirements * compute: only use non\_inheritable\_image\_properties if snapshotting * Replace metaclass registry with explicit opt-in registry from oslo * Begin the transition to an explicit object registry * Add a hacking rule for consistent HTTP501 message * Updated from global requirements * Handle FlavorNotFound when augmenting migrated flavors * Remove unused instance\_group\_policy db calls * Extract helper method 
to get image metadata from volume * Fixes referenced path in nova/doc/README.rst * Updated from global requirements * Ensure to store context in thread local after spawn/spawn\_n * Remove unit\_test doc * Make blueprints doc a reference for nova blueprints * Remove jenkins, launchpad and gerrit docs * Updated from global requirements * Make nova-manage handle completely missing flavor information * Let soft-deleted instance\_system\_metadata readable * Add missing @require\_context * Tolerate iso style timestamps for cells rpc communication * Force the value of LC\_ALL to be en\_US.UTF-8 * Remove hash seed comment from tox.ini * Allow querying for migrations by source\_compute only * Create instance\_extra entry if it doesn't update * Updated from global requirements * Block oslo.vmware 0.13.0 due to a backwards incompatible change * Fix version unit test on Python 3 * Run tests with PyMySQL on Python 3 * Drop explicit suds dependency * Replace dict.iteritems() with six.iteritems(dict) * Don't use dict.iterkeys() * Replace dict(obj.iteritems()) with dict(obj) * Use six.moves.range for Python 3 * Remove db layer hard-code permission checks for security\_group\_default\_rule\_destroy * Remove db layer hard-code permission checks for network\_associate * Remove db layer hard-code permission checks for network\_create\_safe * Remove db layer hard-code permission checks for v2.1 cells * Update docs layout * Add note to doc explaining scope * Add migration\_type and hidden to Migration database model * Fix pip-missing-reqs * Replace iter.next() with next(iter) * devref: add information to clarify nova scope * Updated from global requirements * Remove db layer hard-code permission checks for quota\_destroy\_all\_\* * Replace unicode with six.text\_type * Replace dict.itervalues() with six.itervalues(dict) * API: remove admin require from certificate\_\* from db layer * API: remove admin require for compute\_node(get\_all/search\_by\_hyperviso) from db * API: remove admin require for compute\_node\_create/update/delete from db layer * API: remove admin require from compute\_node\_get\_all\_by\_\* from db layer * Fix failure of stopping instances during init host * API: remove instance\_get\_all\_by\_host(\_and\_node) hard-code admin check from db * Remove db layer hard-code permission checks for service\_get\_by\_host\* * Remove db layer hard-code permission checks for service\_get\_by\_compute\_host * Updated from global requirements * Add SpawnFixture * Updated from global requirements * Start the conversion to oslo.versionedobjects * Cleanup docs landing page * Updated from global requirements * tests: make API signature test also check static function * Updated from global requirements * minor edit to policy\_enforcement.rst * Remove unused db.aggregate\_metadata\_get\_by\_metadata\_key() call * Removed 'PYTHONHASHSEED=0' from tox.ini * Convert bandwidth\_usage related timestamp to UTC native datetime * Add support for forcing migrate\_flavor\_data * Adds toctree to v2 section of docs * Remove db layer hard-code permission checks for fixed\_ip\_get\_\* * Remove db layer hard-code permission checks for network\_get\_all\_by\_host * Remove db layer hard-code permission checks for security\_group\_default\_rule\_create * Remove db layer hard-code permission checks for floating\_ips\_bulk * Remove downgrade support from the cellsv2 api db * Fix migrate\_flavor\_data() to catch instances with no instance\_extra rows * Updated from global requirements * Add config option to disable handling virt 
lifecycle events * Fix migrate\_flavor\_data() to catch instances with no instance\_extra rows * Cleanup unnecessary session creation in floating\_ip\_deallocate * Fix inefficient transaction usage in floating\_ip\_bulk\_destroy * libvirt: Add option to ssh to prevent prompting * Remove db layer hard-code permission checks for network\_get\_associated\_fixed\_ips * update .gitreview for stable/kilo * Store context in local store after spawn\_n * Open Liberty development * Use retrying decorator from oslo\_db * Fix API links and labels * Adds Compute API v2 docs * Add debug logging to quota\_reserve flow * Move suds into test-requirements.txt * Add a fixture for the NovaObject indirection API * Avoid load real policy from policy.d when using fake policy fixture * Skip 'id' attribute to be explicitly deleted in TestCase * Updated from global requirements * default tox cmd should also run 'functional' target * Rename and move the v2.1 api policy into separated files * Tox: reduce complexity level to 35 * Remove db layer hard-code permission checks for service\_get\_all * Test fixture for the api database * Remove context from remotable call signature * Added assertJsonEqual method to TestCase class * Remove usage of remotable context parameter in agent, aggregate * Remove db layer hard-code permission checks for pci * Add get\_api\_session to db api * Use the proper database engine for nova-manage * Add support for multiple database engines * Fixed archiving of deleted records * Revert "Removed useless method \_get\_default\_deleted\_value." * Remove db layer hard-code permission checks for network\_count\_reserved\_ips * refactor policy fixtures to allow use of real policy * ensure DatabaseFixture removes db on cleanup * Remove db layer hard-code permission checks for service\_get\_all\_by\_\* * V2 tests -Reuse server post req/resp sample file * Move oslo.vmware into test-requirements.txt * Remove db layer hard-code permission checks for network\_get\_by\_uuid * Refactor \_regex\_instance\_filter for testing * Add instance\_mappings table to api database * Updated from global requirements * Remove db layer hard-code permission checks for network\_get\_by\_cidr * Add cell\_mappings table to api database * Remove db layer hard-code permission checks for network\_delete\_safe * Remove db layer hard-code permission checks for flavor-manager * Remove db layer hard-code permission checks for service\_delete/service\_get * Remove db layer hard-code permission checks for service\_update * Remove db layer hard-code permission checks for flavor\_access * Modify filters so they can look to HostState * let us specify when samples tests need admin privs * Updated from global requirements * Remove service\_get\_by\_args from the DB API * Remove usage of db.service\_get\_by\_args * Fixed incorrect behavior of method sqlalchemy.api.\_check\_instance\_exists * Remove db layer hard-code permission checks for migrations\_get\* * Updated from global requirements * Truncate encoded instance sys meta to 255 or less * Allow disabling the evacuate cleanup mechanism in compute manager * Add Service.get\_by\_host\_and\_binary and ServiceList.get\_by\_binary * create noauth2 * Add second migrate\_repo for cells v2 database migrations * Updated from global requirements * Force LANGUAGE=en\_US in test runs * Remove compute\_node field from service\_get\_by\_compute\_host * Remove db layer hard-code permission checks for migration\_create/update * Disables pci plugin for v2.1 & microversions * Fix logic for checking if 
az can be updated * Remove TranslationFixture * Remove db layer hard-code permission checks for task\_log\_get\* * Remove db layer hard-code permission checks for task\_log\_begin/end\_task * Remove db layer hard-code permission checks for service\_create * Support specifing multiple values for aggregate keys * Remove db layer hard-code permission checks for fixed\_ip\_disassociate\_all\_by\_timeout * Switch to uuidutils from oslo\_utils library * Revert : Switch off oslo.\* namespace check temporarily * Remove db layer hard-code permission checks for v2.1 agents * Updated from global requirements * Reuse is\_int\_like from oslo\_utils * Replace select-for-update in fixed\_ip\_associate * Updated from global requirements * Remove backwards compat oslo.messaging entries from setup.cfg * Change utils.vpn\_ping() to return a Boolean * Use oslo.log * extract API fixture * Wrap IPv6 address in square brackets for scp/rsync * Added retries in 'network\_set\_host' function * Refactor how to remove compute nodes when service is deleted * Replace usage of LazyPluggable by stevedore driver * Remove computenode relationship on service\_get * Remove nested service from DB API compute\_nodes * Fix "Host Aggregate" section of the Nova Developer Guide * Remove now useless requirements wsgiref * Fixes logic in compute\_node\_statistics * Replace oslo-incubator with oslo\_context * patch out nova libvirt driver event thread in tests * Change outer to inner join in fixed IP DB API func * Small cleanup in pci\_device\_update * Drop deprecated namespace for oslo.rootwrap * Add vcpu\_model to instance object * Fix description of parameters in nova functions * Stop making the database migration backend lazy pluggable * Updated from global requirements * Improved performance of db method network\_in\_use\_on\_host * Replace select-for-update in floating\_ip\_allocate\_address * Extract preserve ephemeral on rebuild from servers plugin * Updated from global requirements * Switch off oslo.\* namespace check temporarily * Switch to using oslo\_\* instead of oslo.\* * Sync with oslo-incubator * Add \_LW for missing translations * Treat LOG.warning and LOG.warn same * Add missing setup.cfg entry for os-user-data plugin * Updated from global requirements * Add formal doc recording hypervisor feature capability matrix * Remove useless argparse requirement * Use a workarounds group option to disable live snaphots * Adds barbican keymgr wrapper * Improvement in 'network\_set\_host' function * Add migrate\_flavor\_data to nova-manage * Add flavor fields to Instance object * Use a workarounds option to disable rootwrap * Create a 'workarounds' config group * Updated from global requirements * libvirt: update uri\_whitelist in fakelibvirt.Connection * Check for LUKS device via 'isLuks' subcommand * Replace select-for-update in fixed\_ip\_associate\_pool * Remove N331 hacking rules * Revert temporary hack to monkey patch the fake rpc timeout * Remove H238 comment from tox.ini * Removed useless method \_get\_default\_deleted\_value * Updated from global requirements * HACKING.rst: Update the location of unit tests' README.rst * Ignore warnings from contextlib.nested * Cleanup bad JSON files * Added hacking rule for assertEqual(a in b, True/False) * Provide compatibliity for db.compute\_node\_statistics * Don't translate exceptions in tests * Enable check for H238 rule * Remove mox dependency * Reduce complexity of the \_get\_guest\_config method * Add flavor column to instance\_extra table * Remove useless requirements * 
increase fake rpc POLL\_TIMEOUT to 0.1s * Fix inconsistencies in the ComputeNode object about service * Fix wrong instructions for rebuilding API samples * Performance: leverage dict comprehension in PEP-0274 * Do not use deprecated assertRaisesRegexp() * Remove unused instance\_group\_metadata\_\* DB APIs * Reduce the complexity of the create() method * speed up tests setting fake rpc polling timeout * Updated from global requirements * Remove non existent rule N327 from HACKING.rst * Replace Hacking N315 with H105 * Enable W292 * Fix and re-gate on H306 * Move to hacking 0.10 * Updated from global requirements * Move WarningsFixture after DatabaseFixture so emit once * remove pylint source code annotations * Cleanup XML for api samples tests for Nova REST API * remove all traces of pylint testing infrastructure * Add WarningsFixture to only emit DeprecationWarning once in a test run * Added hacking rule for assertTrue/False(A in B) * Make pagination work with deleted marker * Switch to tempest-lib's packaged subunit-trace * Nuke XML support from Nova REST API - Phase 2 * Remove unused methods in nova utils * Nuke XML support from Nova REST API - Phase 1 * Don't assume contents of values after aggregate\_update * Reuse methods from netutils * Prevent new code from using namespaced oslo imports * Move metadata filtering logic to utils.py * extract RPC setup into a fixture * Remove unused db.api.dnsdomain\_list * Remove unused db.api.instance\_get\_floating\_address * Remove unused db.api.aggregate\_host\_get\_by\_metadata\_key * Remove unused db.api.get\_ec2\_instance\_id\_by\_uuid * Handle invalid sort keys/dirs gracefully * Cleanup in ResourceExtension ALIAS(v2.1api) * initialize objects with context in Aggregate object tests * Corrects link to API Reference on landing page * Reject non existent mock assert calls * Make instance\_get\_all\_\*() funtions support the smart extra.$foo columns * Updated from global requirements * objects: remove dict compat support from all XXXList() objects * objects: allow creation of objects without dict item compat * Replace stubs with mocks * Updated from global requirements * Use model\_query from oslo.db * Small cleanup in db.sqlalchemy.api.action\_finish() * Inline \_instance\_extra\_get\_by\_instance\_uuid\_query * simplify database fixture to the features we use * extract the timeout setup as a fixture * move all conf overrides to conf\_fixture * move ServiceFixture and TranslationFixture * extract fixtures from nova.test to nova.test.fixtures * move eventlet GREENDNS override to top level * Updated from global requirements * Remove unused db.api.fixed\_ip\_get\_by\_address\_detailed * Add cn\_get\_all\_by\_host and cn\_get\_by\_host\_and\_node to ComputeNode * Fix invalid read\_deleted value in \_validate\_unique\_server\_name() * Adds hacking check for api\_version decorator * rename oslo.concurrency to oslo\_concurrency * Remove needless workaround in utils module * Remove except Exception cases * Workflow documentation is now in infra-manual * Implement microversion support on api methods * Updated from global requirements * Enforce unique instance uuid in data model * Break V2 XML Support * Switch to moxstubout and mockpatch from oslotest * Optimize 'floating\_ip\_bulk\_create' function * factor out \_setup\_logging in test.py * extract \_setup\_timeouts in test.py * Port virtual-interfaces plugin to v2.1(v3) API * Port floating\_ips extension to v2.1 * Removing the headroom calculation from db layer * Change definition of 
API\_EXTENSION\_NAMESPACE to method * Updated from global requirements * remove test.ReplaceModule from test.py * Added db API layer to add instance tag-list filtering support * Added db API layer for CRUD operations on instance tags * Implement 'personality' plugin for V2.1 * move the integrated tests into the functional tree * Port v2 quota\_classes extension to work in v2.1(v3) framework * dummy patch to let tox functional pass * Remove Python 2.6 classifier * Make aggregate filters use objects * Enable pep8 on ./tools directory * Updated from global requirements * Replacement \`\_\` on \`\_LW\` in all LOG.warning part 1 * Replacement \`\_\` on \`\_LE\` in all LOG.exception * Replacement \`\_\` on \`\_LI\` in all LOG.info - part 2 * Replacement \`\_\` on \`\_LI\` in all LOG.info - part 1 * Port assisted-volume-snapshots extension to v2.1 * Updated from global requirements * Add debug log when over quota exception occurs * Don't modify columns\_to\_join formal parameter in \_manual\_join\_columns * Fix bulk floating ip ext to show uuid and fixed\_ip * Use session in cinderclient * Support instance\_extra fields in expected\_attrs on Instance object * Rename private functions in db.sqla.api * Updated from global requirements * Allow passing columns\_to\_join to instance\_get\_all\_by\_host\_and\_node() * GET servers API sorting compute/instance/DB updates * Remove unused db.api.floating\_ip\_set\_auto\_assigned * Remove unused db.api.flavor\_extra\_specs\_get\_item * Create instance\_extra items atomically with the instance itself * Add API schema for aggregates set\_metadata API * Add 'instance-usage-audit-log' plugin for V2.1 * Deduplicate some INFO and AUDIT level messages * move all tests to nova/tests/unit * Add tox -e functional * Drop max-complexity to 47 * Aggregate.save() shouldn't return a value * Updated from global requirements * Log sqlalchemy exception message in migration.py * Add note on running single tests to HACKING.rst * Use oslo.middleware * Switch Nova to use oslo.concurrency * remove use of explicit lockutils invocation in tests * Port security-group-default-rules extension into v2.1 * Revert "Switch Nova to use oslo.concurrency" * Updated from global requirements * DB API: Pass columns\_to\_join to instance\_get\_active\_by\_window\_joined * Drop python26 support for Kilo nova * Switch Nova to use oslo.concurrency * Split out agg multitenancy isolation unit tests * Split agg image props isolation filter unit tests * Port floating\_ip\_dns extention to v2.1 * Remove use of unicode on exceptions * Port floating\_ips\_bulk extention to v2.1 * Revert "Replace outdated oslo-incubator middleware" * Replacement \`\_\` on \`\_LE\` in all LOG.error * Porting baremetal\_nodes extension to v2.1/v3 * Port fixed\_ip extention to v2.1 * Separate filter unit tests for agg extra specs * Allow strategic loading of InstanceExtra columns * Put a cap on our cyclomatic complexity * Port os-networks-associate plugin to v2.1(v3) infrastructure * Port os-tenant-networks plugin to v2.1(v3) infrastructure * Replace outdated oslo-incubator middleware * Remove unused modules copied from oslo-incubator * Add instance\_group\_get\_by\_instance to db.api * Updated from global requirements * Port floating\_ip\_pools extention to v2.1 * Sync with latest oslo-incubator * Use database joins for fixed ips to other objects * Don't log every (friggin) migration version step during unit tests * Port os-networks plugin to v2.1(v3) infrastructure * Port cloudpipe extension to v2.1 * Break out over-quota 
calculation code from quota\_reserve() * Log quota refresh in\_use message at INFO level for logstash * Break out over-quota processing from quota\_reserve() * Remove baremetal virt driver * Port disk\_config extension for V2.1 * Update NoMoreFixedIps message description * Break out quota usage refresh code from quota\_reserve() * Optimize 'fixed\_ip\_bulk\_create' function * Port fping extension to work in v2.1/v3 framework * Use oslo.utils * Break out quota refresh check code from quota\_reserve() * Remove kombu as a dependency for Nova * Remove keystoneclient requirement * support TRACE\_FAILONLY env variable * remove scary error message in tox * Open Kilo development * Add @\_retry\_on\_deadlock to \_instance\_update() * Remove duplicate entry from .gitignore file * Updated from global requirements * Fix SecurityGroupExists error when booting instances * Updated from global requirements * add time to logging in unit tests * Remove unused py33 tox env * Making nova.compute.api to return Aggregate Objects * Updated from global requirements * Don't list entire module autoindex on docs index * mock.assert\_called\_once() is not a valid method * db: Add @\_retry\_on\_deadlock to service\_update() * bring over pretty\_tox.sh from tempest * Remove unused elevated context param from quota helper methods * virt: move assertPublicAPISignatures into base test class * Fix race condition in update\_dhcp * correct inverted subtraction in quota check * Updated from global requirements * Fixes network\_get\_all\_by\_host to use indexes * delete python bytecode before every test run * Stop using intersphinx * Block sqlalchemy migrate 0.9.2 as it breaks all of nova * Add quotas for Server Groups (V2 API compatibility & V2.1 support) * Add unit test to aggregate api * Remove exclude coverage regex from coverage job * Add instance\_extra\_update\_by\_uuid() to DB API * Check requirements.txt files for missing (used) requirements * Import Ironic Driver & supporting files - part 1 * Move to oslo.db * warn against sorting requirements * Allow \_poll\_bandwidth\_usage task to hit slave * Port used\_limits & used\_limits\_for\_admin into v2.1 * Change v3 aggregate API to v2.1 * Port volumes extension to work in v2.1/v3 framework * vmwareapi oslo.vmware library integration * Port limits extension to work in v2.1/v3 framework * Port image-size extension to work in v2.1/v3 framework * Port v2 image\_metadata extension to work in v2.1(v3) framework * Port v2 images extension to work in v2.1(v3) framework * cmd: add nova-serialproxy service * Changes V3 server\_actions extension into v2.1 * Adds nova-idmapshift cli utility * Decrease amount of queries while adding aggregate metadata * Add instance\_extra table and related objects * Add extension block\_device\_mapping\_v1 for v2.1 * Let update\_available\_resource hit slave * Remove concatenation with translated messages * Port simple\_tenant\_usage into v2.1 * GET servers API sorting enhancements common utilities * Add \_security\_group\_ensure\_default() DBAPI method * Remove use of str on exceptions * Updated from global requirements * Updated from global requirements * Correct seconds of a day from 84400 to 86400 * Fix sample files miss for os-aggregates * Port os-server-groups extension to work in v2.1/v3 framework * Use rfc3986 library to validate URL paths and URIs * Allow three periodic tasks to hit slave * Updated from global requirements * Remove unused db api methods * Hacking: a new hacking check was added that used an existing number * Add new db api 
get functions for ec2\_snapshot * Backport some v3 aggregate API unittest to v2 API * Remove metadata/metadetails from instance/server groups * docs - Set pbr 'warnerrors' option for doc build * docs - Fix errors,warnings from document generation * Optimize instance\_floating\_address\_get\_all * Standardize logging for v3 api extensions * Standardize logging for v2 api extensions * Work on document structure and doc building * Optimize db.floating\_ip\_deallocate * Updated from global requirements * Add hacking check for explicit import of \_() * Add a retry\_on\_deadlock to reservations\_expire * docs - Fix doc build errors with SQLAlchemy 0.9 * docs - Prevent eventlet exception during docs generation * docs - Add an index for the command line utilities * docs - Fix docstring issues * Add extensible resources to resource tracker (2) * Fix ImportError during docs generation * Updated from global requirements * Turn on pbr's autodoc feature * Set python hash seed to 0 in tox.ini * Stop depending on sitepackages libvirt-python * Fix FloatingIP.save() passing FixedIP object to sqlalchemy * Fix and Gate on E265 * Revert "Add extensible resources to resource tracker" * Updated from global requirements * Turn periodic tasks off in all unit tests * Updated from global requirements * Re-add H803 to flake8 ignore list * Gate on F402/pep8 * Add extensible resources to resource tracker * Cleanup and gate on hacking E711 and E712 rule * Use oslo.i18n * update ignore list for pep8 * Correctly reject request to add lists of hosts to an aggregate * Avoid possible timing attack in metadata api * Update requirements to include decorator>=3.4.0 * Cleanup and gate on hacking E713 rule * Correct exception for flavor extra spec create/update * Retry db.api.instance\_destroy on deadlock * Fix and gate on H305 and H307 * Catch InvalidAggregateAction when deleting an aggregate * Restore ability to delete aggregate metadata * Updated from global requirements * Fix more re-definitions and enable F811/F813 in gate * Make compute api use util.check\_string\_length * add get\_by\_metadata\_key to AggregateList object * Fix duplicate definitions of variables/methods * Require posix\_ipc for lockutils * Updated from global requirements * Updated from global requirements * Use default rpc\_response\_timeout in unit tests * Replace nova.utils.cpu\_count() with processutils.get\_worker\_count() * Use auth\_token from keystonemiddleware * Updated from global requirements * Add API schema for v2.1/v3 aggregates API * Fix object code direct use of other object modules * Added statement for ... 
else * Not count disabled compute node for statistics * Removes the use of mutables as default args * Updated from global requirements * Add bulk create/destroy functionality to FloatingIP * Cleanup and gate on pep8 rules that are stricter in hacking 0.9 * Updated from global requirements * Remove duplicate code in Objects create() function * Don't translate debug level logs in nova * Fix H401,H402 violations and re-enable gating * Bump hacking to 0.9.x series * Add testing for hooks * Revert "Remove quota-class logic from context and make unit tests pass" * Check the length of aggregate metadata * Add missing translation support * Update HACKING.rst to include N320 * Updated from global requirements * Move oslotest to test only requirements * Revert "Remove quota\_class db API calls" * Updated from global requirements * Enable flake8 F841 checking * Correct exception handling when create aggregate * Add new ec2 instance db API calls * Remove two unused db.api methods * Payload meta\_data is empty when remove metadata * Register objects in more services * Add better coverage support under tox * Add a reference to the nova developer documentation * Enforce query order for getting VIFs by instance * Fix CIDR values denoting hosts in PostgreSQL * Sync common db and db/sqlalchemy * Remove quota\_class db API calls * Ignore etc/nova/nova.conf.sample * Accurate exception info in api layer for aggregate * Add specific regexp for timestamps in v2 xml * Updated from global requirements * Remove explicit dependency on amqplib * Update links in README * Make cells use Fault obj for create * Updated from global requirements * Remove quota-class logic from context and make unit tests pass * Don't translate debug level logs in nova.cmd and nova.db * Updated from global requirements * Use strtime() specific timestamp regexp * Normalize API extension updated timestamp format * Hacking: add rule number to HACKING.rst * Fixed many typos * Don't translate debug level logs in nova.volume * Add new ec2 volume db API calls * Fix bad param name in method docstring * Use eventlet.tpool.Proxy for DB API calls * Updated from global requirements * Loosen import\_exceptions to cover all of gettextutils * Don't translate debug level scheduler logs * Remove utils.reset\_is\_neutron() to avoid races * Add specific doc build option to tox * Use one query instead of two for quota\_usages * Remove nova-clear-rabbit-queues * Add with\_compute\_node to service\_get() * Updated from global requirements * Remove duplicate code from nova.db.sqlalchemy.utils * Add lock on API layer delete floating IP * Use debug level logging in unit tests, but don't save them * Avoid the possibility of truncating disk info file * support local debug logging * Updated from global requirements * Revert "Use debug level logging during unit tests" * Nova utils: add in missing translation * Make sure leases are maintained until release * Rename instance\_actions v3 to server\_actions * Drop nova-rpc-zmq-receiver man-page * Open Juno development * Remove zmq-receiver from setup.cfg * Fix the section name in CONTRIBUTING.rst * Add nova.conf.sample to gitignore * Updated from global requirements * Persist image format to a file, to prevent attacks based on changing it * Add missing test for None in sqlalchemy query filter * Tell pip to install packages it sees globally * Bypass the database if limit=0 for server-list requests * No longer any need to pass admin context to aggregate DB API methods * Add a decorator decorator that checks func args * 
Updated from global requirements * Remove the nova.config.sample file * Fix equal\_any() DB API helper * Revert "Adding image multiple location support" * Revert "enable cloning for rbd-backed ephemeral disks" * Update aggregate should not allow duplicated names * Updated from global requirements * Add py27local tox target * Fix difference between mysql & psql of flavor-show * Task cleanup\_running\_deleted\_instances can now use slave * enable cloning for rbd-backed ephemeral disks * Use debug level logging during unit tests * Add os-server-external-events V3 API * No longer call check\_uptodate.sh in pep8 * Adding image multiple location support * Sync the latest DB code from oslo-incubator * Updated from global requirements * Adds get\_console\_connect\_info API * Support IPv6 when booting instances * Prevent caller from specifying id during Aggregate.create() * Enable flake8 H404 checking * Use oslo-common's logging fixture * Updated from global requirements * Updated from global requirements * Fix instance\_get\_all\_by\_host to actually use slave * Periodic task poll\_bandwidth\_usage can use slave * Adds create backup server extension for the V3 API * Fix the indents of v3 API sample docs * Replace assertEqual(None, \*) with assertIsNone in tests * Make is\_neutron() thread-safe * Fix upper bound checking for flavor create parameters * Replace oslo.sphinx with oslosphinx * Change assertTrue(isinstance()) by optimal assert * Adds migrate server extension for V3 API * Refactor stats to avoid bad join * Remove @author from copyright statements * DB: logging exceptions should use save\_and\_reraise * Remove quota classes extension from the V3 API * Renumber some nova hacking checks * Remove tox locale overrides * Removes os-instance-usage-audit-log from the V3 API * Removes os-simple-tenant-usage from the V3 API * Fix migrations changing the type of deleted column * Typo in backwards compat names for notification drivers * Support building wheels (PEP-427) * Fix misspellings in nova * Add super call to db Base class * Add hacking test to block cross-virt driver code usage * Remove vi modelines * Port to oslo.messaging * Adds suspend server extension for V3 API * Adds pause server extension for V3 API * Removes XML namespace definitions from V3 API plugins * Make fixed\_ip\_get\_by\_address() take columns\_to\_join * Refactor return value of fixed\_ip\_associate calls * Retry reservation commit and rollback on deadlock * Adds lock server extension for V3 API * Remove V3 API XML entry points * Remove v3 xml API sample tests * Finish compacting pre-Icehouse database migrations * Compact pre-Icehouse database migrations <= 210 * Compact pre-Icehouse database migrations <= 200 * Compact pre-Icehouse database migrations <= 190 * Use (# of CPUs) workers by default * Remove policy check in db layer for aggregates * Add db.dnsdomain\_get\_all() method * Updated from global requirements * Small edits on help strings * Make floating\_ip\_bulk\_destroy deallocate quota if not auto\_assigned * Add explicit discussion of dependencies to README.rst * Fix multi availability zone issue part 2 * remove redundant \_\_init\_\_() overwriting when getting ExtensionResources * Use oslo.rootwrap library instead of local copy * Calculate default security group into quota usage * Remove unused dict BYTE\_MULTIPLIERS * replace type() to isinstance() in nova * Make availability\_zone optional in create for aggregates * Enable compute\_node\_update to tolerate deadlocks * Revert "Whitelist external netaddr 
requirement" * Add finer granularity to host aggregate APIs * Adds new method nova.utils.get\_hash\_str * Ensure instance action event list in order * Cleanup the flake8 section of tox.ini * Whitelist external netaddr requirement * Compact pre-Icehouse database migrations <= 180 * Aggregate: Hosts isolation based on image properties * Removes disk-config extension from v3 api * Add apache2 license header to appropriate files for enabling H102 * Adds user\_data extension to nova.api.v3.extensions * Add wsgiref to requirements.txt * Make \_change\_index\_columns use existing utility methods * Fix interprocess locks when running unit-tests * Allow some instance polling periodic tasks to hit db slave * Retry on deadlock in instance\_metadata\_update * Setting the xen vm device id on vm record * Rename instance\_type to flavor in nova.utils and nova.compute.utils * Refactor time conversion helper function for objects in db api * Remove smoketests * Remove middleware ratelimits from v3 api * Require List objects to be able to backlevel their contents * Add error as not-in-progress migration status * Correct uses of :params in docstrings * Compact pre-Icehouse database migrations <= 170 * Updated from global requirements * Rename instance\_type to flavor in baremetal virt driver * Sync middleware audit, base, and notifier from oslo * Fix changes-since filter for list-servers API * Make it possible to override test timeout value * Add atomic flavor access creation * Fix monkey\_patch docstring bug * Extends V3 servers api for pci support * Compact pre-Icehouse database migrations <= 160 * Compact pre-Icehouse database migrations <= 150 * Compact pre-Icehouse database migrations <= 140 * Periodic task \_heal\_instance\_info\_cache can now use slave db * Don't overwrite marker when checking if it exists * Make check more pythonic * Delete instance faults when deleting instance * Don't gate on E125 * Fix a lazy-load exception in security\_group\_update() * Bump to sqlalchemy-migrate 0.8.2 * Use model\_query() instead of session.query in db.instance\_destroy * Periodic task \_poll\_unconfirmed\_resizes can now use slave db * Handle UnicodeEncodeError in validate\_integer * Removes os-personalities extension from the V3 API * Clean up how test env variables are parsed * Remove V3 API version of coverage extension * Rename InstanceType exceptions to Flavor * Remove used\_limits extension from the V3 API * More instance\_type -> flavor renames in db.api * Xenapi: Allow windows builds with xentools 6.1 and 6.2 * Removed unused methods from db.api * Moved quota headroom calculations into quota\_reserve * Checking existence of index before dropping * add hints to api\_samples documentation * nit: fix indentation * Refactor UnexpectedTaskStateError for handling of deleting instances * Move \`diff\_dict\` to compute API * Comments for db.api.compute\_node\_\*() methods * Add DeleteFromSelect to avoid database's limit * Include name/level in unit test log messages * Remove instance\_type\* proxy methods from nova.db.api * Make security\_group\_rule\_get\_by\_security\_group() honor columns * Nova-all: Replace basestring by six for python3 compatability * Fix tests to work with mysql+postgres concurrently * Enable extension access\_ips for v3 API * Allow \_sync\_power\_states periodic task to hit slave DB * Add nova.db.migration.db\_initial\_version() * Nova db/api.py docstring cleanups.. 
* Remove extra space in tox.ini * Replace basestring by six for python3 compatability * Make security\_group\_get() more flexible about joins * Make Object FieldType take an object name instead of a class * Updated from global requirements * Merging two mkfs commands * Updated from global requirements * Updates OpenStack Style Commandments link * Updated from global requirements * Adding support for multiple hypervisor versions * Fix DB API mismatch with sqlalchemy API * Add missing key attribute to AggregateList.get\_by\_host() * Use the oslo fixture module * Move exception definitions out of db api * Migrate Aggregate object to Fields * Remove obsolete redhat-eventlet.patch * Fixes typos in nova/db code * Avoid clobbering {system\_,}metadata dicts passed to instance update * Move \`utils.hash\_file\` -> \`imagecache.\_hash\_file\` * Remove \`utils.timefunc\` function * Remove \`utils.total\_seconds\` * Remove \`utils.get\_from\_path\` * Fixes several misc typos in scheduler code * Remove unused dict functions from utils * Log if a quota\_usages sync updates usage information * Open Icehouse development * Adds missing entry in setup.cfg for V3 API shelve plugin * Prefix \`utils.get\_root\_helper\` with underscore * Remove \`utils.debug\` * Remove \`utils.last\_octet\` * Remove \`utils.parse\_mailmap\` * Updated from global requirements * Remove unecessary \`get\_boolean\` function * Fixes inconsistency in flavors list with marker * Fix console db can't load attribute pool * Require oslo.config 1.2.0 final * Fix Instance object assumptions about joins * Code change for regex filter matching * Fixes modules with wrong file mode bits * Convert TestCases to NoDBTestCase * Convert TestCases to NoDBTestCase * Prune node stats at compute node delete time * Fix non-unicode string values on objects * xenapi: fix pep8 violations in nova plugins * Retry on deadlock in instance\_metadata\_delete * delete a non existent flavor extra spec returns 204 * Don't use ModelBase.save() inside of transaction * Ensure anti affinity scheduling works * Don't use sudo to discover ipv4 address * Update requirements not to boto 2.13.0 * Ignore H803 from Hacking * Add encryption support for volumes to libvirt * Don't return query from db API * Fix migration 211 to downgrade with MySQL * VMware image clone strategy settings and overrides * Clean up object comparison routines in tests * Clean up duplicated change-building code in objects * Fix compute\_node\_get\_all() for Nova Baremetal * Add missing indexes back in from 152 * Add methods to get image metadata from instance * Revert baremetal v3 API extension * Updated from global requirements * Remove indirect dependency from requirements.txt * Allow block devices without device\_name * Port to oslo.messaging.Notifier API * Add expected\_errors for extension aggregates v3 * Add missing Aggregate object tests * Generalize the \_make\_list() function for objects * Add columns\_to\_join to instance\_update\_and\_get\_original * Create mixin class for common DB fields * Add nova.utils.get\_root\_helper() * Inherit base image properties on instance creation * update neutronclient to 2.3.0 minimum * Make compute\_api use Aggregate objects * Add Aggregate object model * Port flavormanage extension to v3 API Part 2 * Add os-block-device-mapping to v3 API * xenapi: add support for auto\_disk\_config=disabled * Add support for API message localization * Port Cheetah templates to Jinja2 * Fix and gate on H302 (import only modules) * Adds V3 API samples for agents, aggregates 
and certificates * Adds support for security\_groups for V3 API server create * Add mock to test-requirements * Disconnect from iSCSI volume sessions after live migration * Safe db.api.compute\_node\_get\_all() performance improvement * Adds API version discovery support for V3 * Port multiple\_create extension to V3 API * Filter network by project id * Pci Device DB support * Fix error messages in v3 aggregate API * Removes V3 API images and image\_metadata extensions * Add db.block\_device\_mapping\_get\_by\_id * Fix typo in baremetal docs * Removes fixed ips extension from V3 API * Link Service.compute\_node with ComputeNode object * Fix aggregate creation/update with null or too long name * Fixes sync issue for user level resources * Fix remove\_fixed\_ip test with CastAsCall * Allow more than one ephemeral device in the DB * Add unique constraint to AggregateMetadata * Add jsonschema to Nova requirements.txt * VMware: Ensure Neutron networking works with VMware drivers * Have tox install via setup.py develop * Remove deprecated CONF.fixed\_range * Offer a paginated version of flavor\_get\_all * Upgrade to Hacking 0.7 * Fix logic in add\_host\_to\_aggregate() * Fix typo in exception message * Fix message for server name with whitespace * Demote personalities from core of API v3 as extensions os-personality * Port disk\_config API to v3 Part 2 * Fix instance\_group\_delete() DB API method * User quota update should not exceed project quota * Fix H501: Do not use locals() for string formatting * maint: remove redundant default=None for config options * Catch ldap ImportError * Merged flavor\_disabled extension into V3 core api * Merged flavorsextraspecs extension into core API * Enable no\_parent and file\_only security * Pull out instance object handling for use by create also * Fix migration downgrade 146 with mysql * Retry failed instance file deletes * Do not use context in db.sqla.api private methods * Finish DB session cleanup * Clean up session in db.sqla.api.migration\_\* methods * Clean up session in db.sqla.api.network\_\* and sec\_groups\_\* methods * Add plug-in modules for direct downloads of glance locations * Clean destroy for project quota * Clean up session in db.sqla.api.get\_ec2 methods * Clean up db.sqla.api.instance\_\* methods * Fix multi availability zone issue part 1 * Demote admin-passwd from core of API v3 as extensions os-admin-password * handle auto assigned flag on allocate floating ip * Use cached nwinfo for secgroup rules * Fix and Gate on H303 (no wildcard imports) * Port server\_usage API to v3 part 2 * Enabled hacking check for Python3 compatible print (H233) * Enabled the hacking warning for Py3 compatible octal literals (H232) * Remove fping plugin from V3 API * Use project quota as default user quota * Update references with new Mailing List location * Remove the monkey patching of \_ into the builtins * Set lock\_path in tests * Bypass queries which cause a contradiction * Add latest oslo DB support * Add note why E712 is ignored * Start using hacking 0.6 * Port migrations extension to v3 API part 2 * Per-project-user-quotas for more granularity * Add unique constraint to InstanceTypeExtraSpecs * Remove instance\_metadata\_get\_all\* from db api * Sync sample config file generator with Oslo * Allow exceptions to propagate through stevedore map * Revert "Add requests requirement capped <1.2.1." 
* Move \_validate\_int\_value controller func to utils * Ensure dates are dates, not strings * Use timeutils.utcnow() throughout the code * Check that the configuration file sample is up to date * Make Instance.save() handle cells DB updates * ec2-api: Disable describing of instances using deleted tags as filter * port BaremetalNodes API into v3 part2 * Move resource usage sync functions to db backend * Remove locals() from various places * Support scoped keys in aggregate extra specs filter * Port AttachInterfaces API to v3 Part 2 * Port used limits extension to v3 API Part 2 * Porting os-aggregates extensions to API v3 Part 2 * Porting os-aggregates extensions to API v3 Part 1 * Porting server metadata core API to API v3 Part 2 * Port limits core API to API-v3 Part 2 * Fix filtering aggregate metadata by key * remove python-glanceclient cap * Fix IPAddress and CIDR type decorators * Port user\_data API to v3 Part 2 * Port flavor\_rxtx extension to v3 API Part 2 * Fix aggregate\_get\_by\_host host filtering * xenapi:populating hypervisor version in host state * Port deferredDelete API to v3 Part 2 * Port instance\_actions API to v3 Part 2 * Prompt error message when creating aggregate without aggregate name * Port AvailabilityZone API to v3 Part 2 * Port service API to v3 Part 2 * Port hide srvr addresses extension to v3 API Pt2 * Port extended status extension to v3 API Part 2 * Port os-console-output extension to API v3 Part 2 * Make db/api strip timezones for datetimes * update Quantum usage to Neutron * Fix aggregate update * Port extended-availability-zone API into v3 part2 * Add unique constraints to AggregateHost * Port server password extension to v3 API Part 2 * Add -U to the command line for pip * Change force\_dhcp\_release default to True * port Host API into v3 part2 * Port admin-actions API into v3 part2 * Fix issue with pip installing oslo.config-1.2.0 * Properly pin pbr and d2to1 in setup.py * Add Instance.get\_by\_id() query method * Port images metadata functionality to v3 API Part 2 * Add unique constraint to ConsolePool * Add "ExtendedVolumes" API extension * Port multinic extension to v3 API Part 2 * Port security groups extension to v3 API Part 2 * Fix info\_cache and bw\_usage update race * Change db.api.instance\_type\_ to db.api.flavor\_ * Add unique constraint to AgentBuild * Ensure flake8 tests run on all api code * Port extended-server-attributes API into v3 part2 * List migrations through Admin API * Port fping extension to v3 API Part 2 * Allow filters to only run once per request if their data is static * Fix formatting errors in documentation * Use oslo.sphinx and remove local copy of doc theme * Add unique constraints to Service * Add unique constraint to FixedIp * Change unique constraint in VirtualInterface * Fix and gate on E125 * Make flavors is\_public option actually work * Make instance\_update() string-convert IP addresses * Port agent API to v3 Part 2 * Add unique constraints to Quota * Port scheduler hints extension to v3 API Part 2 * Port hypervisor API into v3 part2 * port Instance\_usage\_audit\_log API into v3 part2 * Add unique constraint for security groups * Add HACKING check for db session param * Port coverage API into v3 part2 * Clean up and make HACKING.rst DRYer * Fix binding of SQL query params in DB utils * Port quota classes extension to v3 API Part 2 * Port server\_diagnostics extension to v3 API Part2 * Port images functionality to v3 API Part 2 * Port cells extension to v3 API Part 2 * Port consoles extension API into v3 part2 * 
Session cleanup for db.security\_group\_\* methods * Port config\_drive API to v3 Part 2 * Fix metadata access in prep for instance objects * Fix typo for instance\_get\_all\_by\_filters() function * Port flavor\_disabled extension to v3 API Part 2 * Fix sqlalchemy utils * Port flavor\_access extension to v3 API Part 2 * Port Simple\_tenant\_usage API to v3 Part 2 * Better default for my\_ip if 8.8.8.8 is unreachable * db.compute\_node\_update: ignore values['update\_at'] * Port quota API into v3 part2 * Update pyparsing to 1.5.7 * Refactor db.security\_group\_get() instance join behavior * Port missing bits from httplib2 to requests * Retry quota\_reserve on DBDeadlock * Revert "Add oslo-config-1.2.0a2 and pbr>=0.5.16 to requirements." * Add oslo-config-1.2.0a2 and pbr>=0.5.16 to requirements * Remove usage of locals() for formatting from nova.scheduler.\* * Remove db session hack from conductor's vol\_usage\_update() * Add unique constraints to Cell * Accept is\_public=None when listing all flavors * Add missing tests for cell\_\* methods * Enforce sqlite-specific flow in drop\_unique\_constraint * Remove unused cert db method * Session cleanup for db.security\_group\_rule\_\* methods * Organize limits units and per-units constants * Replace utils.to\_bytes() with strutils.to\_bytes() * Backup and restore object registry for tests * Port flavors core API to v3 tree * Remove trivial cases of unused variables (1) * Port certificates API to v3 Part 2 * Fix and enable H403 tests * Update to the latest stevedore * Add missing tests for nova.db.api.network\_\* * Cells: Add support for global cinder * Remove explicit distribute depend * Use an inner join on aggregate\_hosts in aggregate\_get\_by\_host * Nova instance group DB support * Replace functions in utils with oslo.fileutils * Refactors get\_instance\_security\_groups to only use instance\_uuid * DB migration to the new BDM data format * Enhance group handling in extract\_opts * Use InstanceList object for init\_host * Sending volume IO usage broken * Replace openstack-common with oslo in HACKING.rst * Port evacuate API to v3 Part 2 * Speeding up scheduler tests * Port rescue API to v3 Part 2 * Alphabetize v3 API extension entry point list * Add missing exception to cell\_update() * Import osapi\_v3/enabled option in nova/test * More detailed log in failing aggregate extra filter * Call scheduler for run\_instance from conductor * Fix a race where a soft deleted instance might be removed by mistake * Delete unused bin directory * Fix postgresql failures related to Data type * hardcode pbr and d2to1 versions * Adds ability to black/whitelist v3 API extensions * Add notes about how doc generation works * python3: Add py33 to tox.ini * Improve Python 3.x compatibility * Ports consoles API to v3 API * Fixed two minor docs niggles * Adds v3 API disable config option * Ports ips api to v3 API * Cosmetic fix to parameter name in DB API * Removed session from reservation\_create() * Raise exception instances not exception classes * Don't delete sys\_meta on instance delete * Fix volume IO usage notifications been sent too often * Fix \_drop\_unique\_constraint\_in\_sqlite() function * Add update method of security group name and description * Add posargs support to flake8 call * Enumerate Flake8 E12x ignores * Fix and enable flake8 F823 * Fix and enable flake8 F812 * In utils.tempdir, pass CONF.tempdir as an argument * Enumerate Flake8 Fxxx ignores * Enable flake8 E721 * API Extensions framework for v3 API Part 2 * Change db \`deleted\` column 
type utils * Fix tests for sqlalchemy utils * Moved sample network creation out of unittest base class constructor * Make a few places tolerant of sys\_meta being a dict * Remove locals() from scheduler filters * Rename requires files to standard names * Removed session from fixed\_ip\_\*() functions * Raise AgentBuildNotFound on updating/destroying deleted object * Don't update API cell on get\_nwinfo * Fix error in instance\_get\_all\_by\_filters() use of soft\_deleted filter * Fix require\_context() decorators * Use strict=True instead of \`is\_valid\_boolstr\` * Editable default quota support * Remove usage of locals() for formatting from nova.api.\* * Switch to flake8+hacking * Don't update DB records for unchanged stats * Mox should cleanup before stubs * Add missing tests for db.fixed\_ip\_\*(). functions * Cells: Don't allow active -> build * Use Oslo's \`bool\_from\_string\` * Hide lock\_prefix argument using synchronized\_with\_prefix() * Move get\_table() from test\_migrations to sqlalchemy.utils * API extensions framework for v3 API * Record smoketest dependency on gFlags * Add pointer to compute driver matrix wiki page * Update rootwrap with code from oslo * Remove invalid block\_device\_mapping volume\_size of '' * Add sqlalchemy migration utils.create\_shadow\_table method * Add sqlalchemy migration utils.check\_shadow\_table method * Optimize db.instance\_floating\_address\_get\_all method * Session cleanup for db.floating\_ip\_\* methods * Optimize instance queries in compute manager * Convert to using newly imported processutils * Transition from openstack.common.setup to pbr * Remove security\_group\_handler * Sync oslo-incubator print statement changes * Convert to using oslo's execute() method * Remove race condition (in FloatingIps) * Add missing tests for db.floating\_ip\_\* methods * Delete InstanceSystemMetadata on instance deletion * Volume IO usage gets reset to 0 after a reboot / crash * Add the availability\_zone to the volume.usage notifications * Performance optimization for contrib.flavorextraspecs * Refactor work with db.instance\_type\_\* methods * Fix bug in db.instance\_type\_destroy * Move db.instance\_type\_extra\_specs\_\* to db.instance\_type\_\* methods * Fix fixed\_ip\_count\_by\_project in DB API * Add option to exclude joins from instance\_get\_by\_uuid * Import and convert to oslo loopingcall * Remove orphaned db method instance\_test\_and\_set * Allow listing fixed\_ips for a given compute host * Don't join metadata twice in instance\_get\_all() * Optimize some of the periodic task database queries in n-cpu * Change DB API instance functions for selective metadata fetching * Replace metadata joins with another query * Remove unnecessary LOG initialisation * Add tenant/ user id to volume usage notifications * Import eventlet in \_\_init\_\_.py * Allow describe\_instances to use tags for searches * Remove race condition (in InstanceTypeProjects) * Optimize resource tracker queries for instances * Move console scripts to entrypoints * Update latest oslo.setup * Remove print statements * Return 409 on creating/importing same name keypair * Add CRUD methods for tags to the EC2 API * Adds Tilera back-end for baremetal * Remove race condition (in InstanceTypes) * Add missing tests for db.instance\_type\_\* methods * set up FakeLogger for root logger * Include Co-authored-by entries in AUTHORS * Sync everything from oslo-incubator * Set version to 2013.2 * Refactor db.service\_destroy and db.service\_update methods * Enable tox use of site-packages for 
libvirt * Fix db archiving bug with foreign key constraints * Add quotas for fixed ips * Makes safe xml data calls raise 400 http error instead of 500 * Check keypair destroy result operation * Remove sqlalchemy calling back to DB API * Fix: Nova aggregate API throws an uncaught exception on invalid host * Skip deleted fixed ip address for os-fixed-ips extension * Don't load system\_metadata when it isn't joined * Delete instance metadata when delete VM * Refactor work with session in db.block\_device\_mapping\_\* methods * Force resource updates to update updated\_at * Remove instance['instance\_type'] relationship from db api * Extended server attributes can show wrong hypervisor\_hostname * Compile BigInteger to INTEGER for sqlite * Remove uses of instance['instance\_type'] from nova/compute * Make nova-manage db archive\_deleted\_rows more explicit * add .idea folder to .gitignore pycharm creates this folder * Update tox.ini to support RHEL 6.x * Remove parameters containing passwords from Notifications * Standarize ip validation along the code * Rename VMWare to VMware * docs should indicate proper git commit limit * Imporove db.sqlalchemy.api.\_validate\_unique\_server\_name method * Remove unused db calls from nova.db.api * Compute manager should remove dead resources * instance\_info\_cache\_update creates wrongly * don't stack trace if long ints are passed to db * Add instance\_type\_get() to virt api * Remove duplicate options(joinedload) from aggregates db code * Update OpenStack LLC to Foundation * Additional tests for safe parsing with minidom * Retry floating\_ip\_fixed\_ip\_associate on deadlock * Sync nova with oslo DB exception cleanup * Remove unused nova.db.api:instance\_get\_all\_by\_reservation * Migration 146: Execute delete call * Spelling: compatable=>compatible * Move DB thread pooling to DB API * Remove race condition (in Networks) * Move some context checking code from sqlalchemy * Add Nova quantum security group proxy * Fix handling of source\_groups with no-db-compute * Multi-tenancy isolation with aggregates * Fix broken logging imports * Use oslo-config-2013.1b4 * Add a safe\_minidom\_parse\_string function * Retry bw\_usage\_update() on innodb Deadlock * Default SG rules for the Security Group "Default" * create new cidr type for data storage * Remove unused nova.db.api:network\_get\_by\_bridge * Remove unused nova.db.api:network\_get\_by\_instance * Remove unused db calls from nova.db.sqlalchemy.api * Remove unused db calls * Small spelling fix in sqlalchemy utils * Remove race condition (in TaskLog) * Add generic dropper for duplicate rows * Fix typo/bug in generic UC dropper * clean up missing whitespace after ':' * Push 'Error' result from event to instance action * Assign unique names with os-multiple-create * Harmonize PEP8 checking between tox and run\_tests.sh * Allow archiving deleted rows to shadow tables, for performance * Synchronize code from oslo * Canonizes IPv6 before insert it into the db * API extension for accessing instance\_actions * Use joinedload for system\_metadata in db * Module import style checking changes * Check the length of flavor name in "flavor-create" * Update docs about testing * Add generic UC dropper * Fix regression in non-admin simple\_usage:show * Fix inaccuracies in the development environment doc * Update to simplified common oslo version code * Use joined version of db.api calls * Move floating ip db access to calling side * Added the build directory to the tox.ini list pep8 ignores * Fix lazy load 
'system\_metadata' failed problem * Remove strcmp\_const\_time * Update .coveragerc * Use oslo database code * Default value of monkey\_patch\_modules is broken * Update HACKING.rst per recent changes * Optimize floating ip list to make one db query * Reimplement is\_valid\_ipv4() * Tweakify is\_valid\_boolstr() * Make system\_metadata update in place * Record instance actions and events * Postgres does not like empty strings for type inet * Fixes 'not in' operator usage * Fixes "is not" usage * Code cleanup for rebuild block device mapping * Nova Hyper-V driver refactoring * Make sure there are no unused import * Refactoring/cleanup of compute and db apis * Allow users to specify a tmp location via config * Add system\_metadata to db.instance\_get\_active\_by\_window\_joined * Enable N302: Import modules only * clean up api\_samples documentation * populate dnsmasq lease db with valid leases * Fix rendering of FixedIpNotFoundForNetworkHost * Fix hacking N302 import only modules * Avoid db lookup in info\_from\_instance() * Fixes task\_log\_get and task\_log\_get\_all signatures * Provide creating real unique constraints for columns * Fix nova coverage * Fix floating ips with external gateway * Add support for Option Groups in LazyPluggable * fix misspellings in logs, comments and tests * Remove restoring soft deleted entries part 2 * Remove restoring soft deleted entries part 1 * Remove some db calls from db servicegroup driver * Cells: Fix for relaying instance info\_cache updates * Ignore auto-generated files by lintstack * Clean up db network db calls for fixed and float * Fix multi line docstring tests in hacking.py * don't allow crs in the code * enforce server\_id can only be uuid or int * Move compute node operations to conductor * correcting for proper use of the word 'an' * Add nova-spicehtml5proxy helper * Clean up get\_instance\_id\_by\_floating\_address * Use testrepository setuptools support * Cells: Add cells API extension * More HostAPI() cleanup for cells * Revert "Use testr setuptools commands." 
* enable hacking.py self tests * Fix uses of service\_get\_all\_compute\_by\_host * use postgresql INET datatype for storing IPs * Clean up compute API image\_create * Use testr setuptools commands * Fix quota updating when admin deletes common user's instance * Move logic from os-api-host into compute * make runtests -p act more like tox * fix new N402 errors * Move service\_down\_time to nova.service * fix N402 for rest of nova * fix N402 for nova/db * Move osapi\_compute\_unique\_server\_name\_scope to db * Move compute\_topic into nova.compute.rpcapi * Fix N402 for nova/api * New instance\_actions and events table, model, and api * Remove availability\_zones from service table * Enable Aggregate based availability zones * Refresh instance metadata in-place * CLI for bare-metal database sync * Move global glance opts into nova.image.glance * fix N401 errors, stop ignoring all N4\* errors * PXE bare-metal provisioning helper server * Changed 'OpenStack, LLC' message to 'OpenStack Foundation' * Invert test stream capture logic for debugging * Refactor work with TaskLog in sqlalchemy.api * NovaBase.delete() rename to NovaBase.soft\_delete() * Refactor periodic tasks * Timeout individual tests after one minute * Cells: Add the main code * Add helper methods to nova.paths * Move global path opts in nova.paths * Fix bug and remove update lock in db.instance\_test\_and\_set() * Remove unused imports * Database metadata performance optimizations * db.network\_delete\_safe() method performance optimization * db.security\_group\_rule\_destroy() method performance optimization * Database reservations methods performance optimization * Using query.soft\_delete() method insead of soft deleting by hand * Removed unused imports * Enable nova exception format checking in tests * Eliminate race conditions in floating association * Parameterize database connection in test.py * Move baremetal database tests to fixtures * Add .testrepository/ directory to gitginore * Update exceptions to pass correct kwargs * Remove fake\_tests opt from test.py * Move TimeOverride to the general reusable-test-helper place * Add more association support to network API * Replace fixtures.DetailStream with fixtures.StringStream * Use testr to run nova unittests * Add general mechanism for testing api coverage * remove session param from instance\_get * remove session param from instance\_get\_by\_uuid * Move some opts into nova.utils * Properly scope password options * Move monkey patch config opts into nova.utils * Move all temporary files into a single /tmp subdir * Use fixtures library for nova test fixtures * Remove unused bridge interfaces * Order instance faults by created\_at and id * Fix pep8 exclude logic for 1.3.3 * Add agent build API support for list/create/delete/modify agent build * Make policy.json not filesystem location specific * Add pyflakes option to tox * Implements volume usage metering * Fix test suite to use MiniDNS * remove session param from certificate\_get * improve sessions for key\_pair\_(create,destroy) * Include 'hosts' and 'metadetails' in aggregate * Make resize and multi-node work properly together * Provide better error message for aggregate-create * Allow multi\_host compute nodes to share dhcp ip * Add pluggable ServiceGroup monitoring APIs * Add SSL support to utils.generate\_glance\_url() * Truncate large console logs in libvirt * Move global fixture setup into nova/test.py * Add a CONTRIBUTING file * Cells: Re-add DB model and calls * Move sql options to nova.db.sqlalchemy.session * 
Add missing binary * Pin pep8 to 1.3.3 * Use CONF.import\_opt() for nova.config opts * Remove nova.config.CONF * Add the beginnings of the nova-conductor service * Remove useless function quota\_usage\_create * Compact pre-Grizzly database migrations * Ignore editor backup files * Remove nova.flags * improve session handling around instance\_ methods * add instance\_type\_extra\_specs to instances * Change a toplevel function comment to a docstring * Make ec2\_instance\_create db method consistant across db apis * Adds documentation for Hyper-V testing * Update api\_samples README.rst to use tox * Allow group='foo' in self.flags() for tests * Make sure instance data is always refreshed * Remove gen\_uuid() * Isolate tests from the environment variable http\_proxy * Sync latest code from oslo-incubator * Add DB query to get in-progress migrations * Adds REST API support for Fixed IPs * Added separate bare-metal MySQL DB * Ban db import from nova/virt * Upgrade pylint version to 0.26.0 * Removes fixed\_ip\_get\_network * improve session handling around virtual\_interfaces * improve sessions for reservation * improve session handling around quotas * Remove custom test assertions * Add nova option osapi\_compute\_unique\_server\_name\_scope * Switch from FLAGS to CONF in tests * Updated scheduler and compute for multiple capabilities * Switch from FLAGS to CONF in nova.db * Removed two unused imports * Remove unused functions * Fixes a bug in nova.utils, due to Windows compatibility issues * improve session handling of dnsdomain\_list * Make tox.ini run pep8/hacking checks on bin * clean up dnsdomain\_unregister * Make utils.mkfs() set label when fs=swap * Remove nova-volume DB * Make instance\_system\_metadata load with instance * Remove unused function require\_instance\_exists * Fix warnings found with pyflakes * make utils.mkfs() more general * Cleanup nova.db.sqlalchemy.api import * Use uuidutils.is\_uuid\_like for uuid validation * Switch from FLAGS to CONF in misc modules * Move parse\_args to nova.config * improve sessions around floating\_ip\_get\_by\_address * Improve EC2 describe\_security\_groups performance * Increased MAC address range to reduce conflicts * Move to a more canonicalized output from qemu-img info * Add call to reset quota usage * Remove nose detailed error reporting * improve sessions around compute\_node\_\* * Use testtools as the base testcase class * removes the nova-volume code from nova * remove session parameter from fixed\_ip\_get * Make instance\_get\_all() not require admin context * Fix nova-network MAC collision logic * Make nova-rootwrap optional * Fix hardcoded topic strings with constants * Update common * Remove unused imports in setup.py * Migrate to fileutils and lockutils * Migrate network of an instance * Remove deprecated root\_helper config * Remove is\_admin\_context from sqlalchemy.api * SanISCSIDriver SSH execution fixes * Fix bad Log statement in nova-manage * Move mkfs from libvirt.utils to utils * Add trove classifiers for PyPI * Fix and enable pep8 E502, E712 * read\_deleted snapshot and volume id mappings * Set read\_deleted='yes' for instance\_id\_mappings * Add TestCase.stub\_module to make stubbing modules easier * Update tools hacking for pep8 1.2 and beyond * Remove outdated moduleauthor tags * Add aggregates extension to API samples test * Remove TestCase.assertNotRaises * Revert "Add full test environment." 
* Fixes error message for flavor-create duplicate ID * Updated code to update attach\_time of a volume while detaching * Deleting security group does not mark rules as deleted * Collect more accurate bandwidth data for XenServer * Clarify dangerous use of exceptions in unit tests * Restore SIGPIPE default action for subprocesses * Fix bugs in resource tracker and cleanup * Properly create and delete Aggregates * No stack trace on bad nova aggregate-\* command * Fix aggregate\_hosts.host migration for sqlite * Fix marker pagination for /servers * Fix doc/README.rst to render properly * make ensure\_default\_security\_group() call sgh * Adds new volume API extensions * Include volume\_metadata with object on vol create * Add man pages * Clean up handling of project\_only in network\_get * Add README for doc folder * Return 400 if create volume snapshot force parameter is invalid * Make ip block splitting a bit more self documenting * Add documentation for scheduler filters scope * Improve floating IP delete speed * Set install\_requires in setup.py * Stop lock decorator from leaving tempdirs in tests * Implement paginate query use marker in nova-api * Fix synchronized decorator path cleanup * Add scope to extra\_specs entries * Speed up creating floating ips * Fixes sqlalchemy.api.compute\_node\_get\_by\_host * Address race condition from concurrent task state update * Clear up the .gitignore file * hacking: Add driver prefix recommendation * External locking for image caching * Do not run pylint by default * Correct utils.execute() to check 0 in check\_exit\_code * Add ops to aggregate\_instance\_extra\_specs filter * Implement project specific flavors API * Move ensure\_tree to utils * Rename class\_name to project\_id * Remove unused permitted\_instance\_types * Add lintstack error checker based on pylint * Adding indexes to frequently joined database columns * Adds integration testing for api samples * Make instance\_update\_and\_get\_original() atomic * Remove unused instance id-to-uuid function * Have compute\_node\_get() join 'service' * Implements sending notification on metadata change * Code clean up * Keep the ComputeNode model updated with usage * Makes sure instance deletion ok with deleted data * OpenStack capitalization added to HACKING.rst * Makes sure tests don't leave lockfiles around * Disable I18N in Nova's test suites * Revert per-user-quotas * Remove unused imports * Fix PEP8 issues * Fix spelling typos * Ignoring \*.sw[op] files * Adds Hyper-V support in nova-compute (with new network\_info model), including unit tests * Allow nova to guess device if not passed to attach * Remove assigned, but unused variables from nova/db/sqlalchemy/api.py * Improve bw\_usage\_update() performance * Update extra specs calls to use deleted: False * Implement network association in OS API * import module, not type * Make sure ec2 mapping raises proper exceptions * Sync some cleanups from openstack.common * Revert "Remove unused add\_network\_to\_project() method" * Uniqueness checks for floating ip addresses * Add a 50 char git title limit test to hacking * General host aggregates part 2 * Update devref for general host aggregates * Move results filtering to db * Improve external locking on Windows * Solve possible race in semaphor creation * Adds per-user-quotas support for more detailed quotas management * Move root\_helper deprecation warning into execute * Flavor extra specs extension use instance\_type id * Simplify file hashing * Remove old exception type * Improve external lock 
implementation * Fix broken pep8 exclude processing * Update reset\_db to call setup if \_DB is None * Remove unused imports * Fix a comment typo in db api * Fix issue with filtering where a value is unicode * Deprecate root\_helper in favor of rootwrap\_config * Prevent instance\_info\_cache from being altered post instance * Convert virtual\_interfaces to using instance\_uuid * reduce debugging from utils.trycmd() * Add a link from HACKING to wiki GitCommitMessages page * Make compute only auto-confirm its own instances * Don't store system\_metadata in xenstore * Only enforce valid uuids if a uuid is passed * Call correct implementation for quota\_destroy\_all\_by\_project * Convert fixed\_ips to using instance\_uuid * Inject instance metadata into xenstore * Allow floating IP pools to be deleted * Fix a bug in compute\_node\_statistics * Add call to get hypervisor statistics * Fix wrong regex in cleanup\_file\_locks * Remove unused add\_network\_to\_project() method * Updates migration 111 to work w/ Postgres * Remove unnecessary use of with\_lockmode * Fix SQL deadlock in quota reservations * remove unused clauses[] variable * Return 413 status on over-quota in the native API * Use all deps for tools/hacking.py tests in tox * General-host-aggregates part 1 * Exclude openstack-common from pep8 checks * Add SKIP\_WRITE\_GIT\_CHANGELOG to setup.py * Remove deprecated auth-related db code * Remove unused find\_data\_files function in setup.py * Refactor instance\_usage\_audit. Add audit tasklog * Avoid lazy-loading errors on instance\_type * Propagate setup.py change from common * Removed a bunch of cruft files * Update common setup code to latest * Implements updating complete bw usage data * This patch stops metadata from being deleted when an instance is deleted * sort .gitignore for readability * ignore project files for eclipse/pydev * Add \*.egg\* to .gitignore * Finish AUTHORS transition * Modifies ec2/cloud to be able to use Cinder * Expand HACKING with commit message guidelines * Switch to common logging * Run hacking tests as part of the gate * Ability to read deleted system metadata records * Remove passing superfluous read\_deleted argument * Flesh out the README file with a little more useful information * Implement blueprint ec2-id-compatibilty * Add multi-process support for API services * Use setuptools-git plugin for MANIFEST * Add missing nova-novncproxy to tarballs * Rename the instance\_id column in instance\_info\_caches * Add hypervisor information extension * Cleanup of image service code * Fix several PEP-8 issues * Fix db calls for snaphsot and volume mapping * Removes utils.logging\_error (no longer used) * Removes utils.fetch\_file (no longer used) * Improve filter\_scheduler performance * Re-factor instance DB creation * Add full test environment * vm state and task state management * Update pylint/pep8 issues jenkins job link * SM volume driver: DB changes and tests * Imports cleanup * Enforce an instance uuid for instance\_test\_and\_set * Replaces functions in utils.py with openstack/common/timeutils.py * Add CPU arch filter scheduler support * Cleanup instance\_update so it only takes a UUID * Re-add private \_compute\_node\_get call to sql api * Remove unused DB calls * Remove utils.deprecated functions * instance\_destroy now only takes a uuid * Finalize tox config * Convert consoles to use instance uuid * Add zeromq driver. 
Implements blueprint zeromq-rpc-driver * Prefix all nova binaries with 'nova-' * Migrate security\_group\_instance\_association to use a uuid to refer to instances * Migrate instance\_metadata to use a uuid to refer to instances * Adds \`disabled\` field for instance-types * Revert "blueprint " * Use openstack.common.cfg.CONF * Unused imports cleanup (folsom-2) * blueprint * convert virt drivers to fully dynamic loading * Remove duplicate words in comments * Eliminate a race condition on instance deletes * Backslash continuation removal (Nova folsom-2) * Update .gitignore * Move queue\_get\_for() from db to rpc * Use cfg's new global CONF object * Add attach\_time for EC2 Volumes * fixing issue with db.volume\_update not returning the volume\_ref * Grammar fixes * Grammar / spelling corrections * Run coverage tests via xcover for jenkins * Use utils.utcnow rather than datetime.utcnow * Expose a limited networks API for users * Added a instance state update notification * Update pep8 dependency to v1.1 * Nail pep8 dependencies to 1.0.1 * Add scheduler filter: TypeAffinityFilter * Finish quota refactor * Include volume-usage-audit in tarballs * Use cfg's new behavior of reset() clearing overrides * fixed\_ip\_get\_by\_address read\_deleted from context * Rearchitect quota checking to partially fix bug 938317 * Make use of openstack.common.jsonutils * Alphabetize imports * Adding notifications for volumes * Destroy system metadata when destroying instance * Use ConfigOpts.find\_file() to find paste config * Remove instance Foreign Key in volumes table, replace with instance\_uuid * Remove old flagfile support * Defer image\_ref update to manager on rebuild * Remove instance action logging mechanism * pylint cleanup * Allow sitepackages on jenkins * Replaces exceptions.Error with NovaException * Register fake flags with rpc init function * Generate a Changelog for Nova * Add instance\_system\_metadata modeling * Use ConfigOpts.find\_file() to locate policy.json * Implement key pair quotas * Compact pre-Folsom database migrations * Use save\_and\_reraise\_exception() from common * Convert Volume and Snapshot IDs to use UUID * adjust logging levels for utils.py * Migrate block\_device\_mapping to use instance uuids * Removes RST documentation and moves it to openstack-manuals * Remove workaround for sqlalchemy-migration < 0.6.4 * Use openstack.common.importutils * Keep uuid with bandwidth usage tracking to handle the case where a MAC address could be recycled between instances * Refactor nova.rpc config handling * Improved tools/hacking.py * Scope coverage report generation to nova module * Moves \`usage\_from\_instance\` into nova.compute.utils * Implement security group quotas * Add deleted\_at to instance usage notification * Port types and extra specs to volume api * Renamed current\_audit\_period function to last\_completed\_audit\_period to clarify its purpose * Improve grammar throughout nova * Improved localization testing * Remove nova Direct API * migration\_get\_all\_unconfirmed() now uses lowercase "finished" Fixes bug 977719 * Run tools/hacking.py instead of pep8 mandatory * Delete fixed\_ips when network is deleted * HACKING fixes, sqlalchemy fix * Cleanup xenapi driver logging messages to include instance * Use -1 end-to-end for unlimited quotas * Treat -1 quotas as unlimited * Remove nova.rpc.impl\_carrot * fix TypeError with unstarted threads in nova-network * bug 965335 * Check that volume has no snapshots before deletion * Fix disassociate query to remove foreign keys * Clean 
up read\_deleted support in host aggregates code * ensure atomic manipulation of libvirt disk images * Reordered the alphabet * Add periodic\_fuzzy\_delay option * Remove unused certificate SQL calls * Assume migrate module missing \_\_version\_\_ is old * Remove tools/nova-debug * Change mycloud.com to example.com (RFC2606) * Clarify HACKING's shadow built-in guidance * Implement quota classes * Fixes bug 949038 * Fixes bug 957708 * Improve performance of generating dhcp leases * Make sqlite in-memory-db usable to unittest * Workaround issue with greenthreads and lockfiles * various cleanups * Remove Virtual Storage Array (VSA) code * db api: Remove check for security groups reference * Remove broken bin/\*spool\* tools * Refix mac change to work around libvirt issue * Use a FixedIp subquery to find networks by host * Make fixed\_ip\_disassociate\_all\_by\_timeout work * Sort results from describe\_instances in EC2 API * doc/source/conf.py: Fix man page building * Nuke some unused SQL api calls * Remove update lockmode from compute\_node\_get\_by\_host * Clean up setup and teardown for dhcp managers * Fix nova-manage backend\_add with sr\_uuid * Add pybasedir and bindir options * Use a high number for our default mac addresses * Remove an obsolete FIXME comment * Fix racey snapshots * setup.py: Fix doc building * Add adjustable offset to audit\_period * Clear created attributes when tearing down tests * HACKING fixes, all but sqlalchemy * Remove trailing whitespaces in regular file * Bug #943178: aggregate extension lacks documentation * No longer ignoring man/novamanage * Fix rst formatting and cross-references * fix restructuredtext formatting in docstrings that show up in the developer guide * Update fixed\_ip\_associate to not use relationships * Only raw string literals should be used with \_() * assertRaises(Exception, ...) 
considered harmful * update copyright, add version information to footer * Refactor spawn to use UndoManager * Add missing filters for new root commands * Fixes bug 943188 * Use constant time string comparisons for auth * Rename zones table to cells and Instance.zone\_name to cell\_name * Fixes bug 942549 * sm vol driver: Fix regression in sm\_backend\_conf\_update * Add utils.tempdir() context manager for easy temp dirs * Do not hit the network\_api every poll * OS X Support fixed, bug 942352 * Adds temporary chown to sparse\_copy * Fixes cloudpipe extension to work with keystone * Add missing directive to tox.ini * Clean stale lockfiles on service startup : fixes bug 785955 * Add hypervisor\_hostname to compute\_nodes table and use it in XenServer * blueprint host-aggregates: improvements and clean-up * blueprint host-aggregates: xenapi implementation * Support tox-based unittests * Add attaching state for Volumes * Escape apostrophe in utils.xhtml\_escape() (lp#872450) * nova.conf sample tool * Fix traceback running instance-usage-audit * Support non-UTC timestamps in changes-since filter * Allow file logging config * bug 933620: Error during ComputeManager.\_poll\_bandwidth\_usage * Make database downgrade works * Backslash continuations (nova.tests) * Core modifications for future zones service * Fix API extensions documentation, bug 931516 * bw\_usage takes a MAC address now * Prevent Duplicate VLAN IDs * Don't allow EC2 removal of security group in use * Standardize logging delaration and use * Remove the last of the gflags shim layer * Move translations to babel locations * Get rid of distutils.extra * Use named logger when available * Removes constraints from instance and volume types * Backslash continuations (misc.) * Update cfg from openstack-common * Remove ajaxterm from Nova * Fix support for --flagfile argument * Fix \_poll\_bandwidth\_usage if no network on vif * Backslash continuations (nova.db) * Allows nova to read files as root * Re-run nova-manage under sudo if unable to read conffile * Move cfg to nova.openstack.common * blueprint nova-image-cache-management phase1 * Fix disassociation of fixed IPs when using FlatManager * Optionally disable file locking * Avoid weird test error when mox is missing * Add support for pluggable l3 backends * lockfile.FileLock already appends .lock * Ties quantum, melange, and nova network model * Fix VPN ping packet length * Remove utils.runthis() * Refactor away the flags.DEFINE\_\* helpers * Remove session arg from sm\_backend\_conf\_update * Remove session arguments from db.api * blueprint host-aggregates: OSAPI extensions * blueprint host-aggregates: OSAPI/virt integration, via nova.compute.api * Fixes bug 921265 - i'nova-manage flavor create|list' * Blueprint xenapi-provider-firewall and Bug #915403 * Create nova cert worker for x509 support * usage: Fix time filtering * Add an API extension for creating+deleting flavors * Abstract out \_exact\_match\_filter() * Adds a bandwidth filter DB call * KVM and XEN Disk Management Parity * ComputeNode Capacity support * Change the logic for deleting a record dns\_domains * Fix nova-manage floating list (fixes bug 918804) * scheduler host\_manager needs service for filters * Add dns domain manipulation to nova * Implements blueprint vnc-console-cleanup * blueprint host-aggregates * Add missing scripts to setup.py (lp#917676) * Fixes bug 917128 * Implement BP untie-nova-network-models * Remove a whole bunch of unused imports * Implements blueprint separate-nova-volumeapi * Call to 
instance\_info\_cache\_delete to use uuid * Add @utils.deprecated() * Adds support for floating ip pools * Refactors utils.load\_cached\_file * Workaround bug 852095 without importing mox * Bug #912858: test\_authors\_up\_to\_date does not deal with capitalized names properly * Adds workaround check for mox in to\_primitive * preload cache table and keep it up to date * Update HACKING.rst * Remove install\_requires processing * PEP8 type comparison cleanup * Adds running\_deleted\_instance\_reaper task * Unused db.api cleanup * PEP8 remove direct type comparisons * Clean up pylint errors in top-level files * Ensure generated passwords meet minimum complexity * 'except:' to 'except Exception:' as per HACKING * Fixes LP bug #907898 * Bug#898257 abstract out disk image access methods * Make UUID format checking more correct * Document return type from utils.execute() * Makes disassociate by timeout work with multi-host * Adds missing joinedload for vif loading * Starting work on exposing service functionality * Fixes bug 723235 * Expose Asynchronous Fault entity in the OSAPI * Update utils.execute so that check\_exit\_code handles booleans. Fixes LP bug #904560 * Fixes bug 887402 * Bug 902626 * Renaming instance\_actions.instance\_id column to instance\_uuid. blueprint: internal-uuids * Moves find config to utils because it is useful * fixed\_ips by vif does not raise * Add preparation for asynchronous instance faults * Log it when we get a lock * Adds network model and network info cache * Rename .nova-venv to .venv * Add ability to see deleted and active records * A more secure root-wrapper alternative * First steps towards consolidating testing infrastructure * Remove remnants of babel i18n infrastructure * Remove autogenerated pot file * remove duplicate netaddr in nova/utils * Adds extension documentation for some but not all extensions * Remove VIF<->Network FK dependancy * split rxtx\_factor into network and instance\_type * Fix for bug 887712 * Fix RPC responses to allow None response correctly * removed logic of throwing exception if no floating ip * Adding an install\_requires to the setup call. Now you can pip install nova on a naked machine * Removing obsolete bzr-related clauses in setup.py * Updating {add,remove}\_security\_group in compute.api to use instance uuids instead of instance ids. blueprint internal-uuids * Converted README to RST format * Follow hostname RFCs * Remove contrib/nova.sh and other stale docs * Separate metadata api into its own service * Log the URL to an image\_ref and not just the ID * Verify security group parameters * More spelling fixes inside of nova * Refactor logging\_error into utils * Add DHCP support to the QuantumManager and break apart dhcp/gateway * Added some documentation to db.api module docstring * Xen Storage Manager Volume Driver * flatten distributed scheduler * Creating uuid -> id mapping for S3 Image Service * Fixes lp883279 * Log original dropped exception when a new exception occurs * Fix lp:861160 -- newly created network has no uuid * Adding bulk create fixed ips. The true issue here is the creation of IPs in the DB that are not currently used(we are building the entire block). 
This fix is just a bandaid, but it does cut ~25 seconds off of the quantum tests on my laptop * Revert how APIs get IP address info for instances * Replaces all references to nova.db.api with nova.db * Add .gitreview config file for gerrit * Convert instancetype.flavorid to string * Improve the liveness checking for services * Repartition and resize disk when marked as managed * Remove dead DB API call * Remove unused flag\_overrides from TestCase * Xenapi driver can now generate swap from instance\_type * Adds the ability to automatically issue a hard reboot to instances that have been stuck in a 'rebooting' state for longer than a specified window * Adds more usage data to Nova's usage notifications * Remove AoE, Clean up volume code * Include original exception in ClassNotFound exception * Allow db schema downgrades * moved floating ip db access and sanity checking from network api into network manager added floating ip get by fixed address added fixed\_ip\_get moved floating ip testing from osapi into the network tests where they belong * Adding run\_test.sh artifacts to .gitignore * Fix the grantee group loading for source groups * Fix some minor issues due to premature merge of original code * \* Rework osapi to use network API not FK backref \* Fixes lp854585 * This patch adds flavor filtering, specifically the ability to flavor on minRam, minDisk, or both, per the 1.1 OSAPI spec * Add next links for server lists in OSAPI 1.1. This adds servers\_links to the json responses, and an extra atom:link element to the servers node in the xml response * Merging trunk * Adding OSAPI tests for flavor filtering * This patch adds instance progress which is used by the OpenStack API to indicate how far along the current executing action is (BUILD/REBUILD, MIGRATION/RESIZE) * Merging trunk * Fixes lp:855115 -- issue with disassociating floating ips * Renumbering instance progress migration * Fixing tests * Keystone support in Nova across Zones * trunk merge fixup * Adds an 'alternate' link to image views per 3.10 and 3.11 of http://docs.openstack.org/cactus/openstack-compute/developer/openstack-compute-api-1.1/content/LinksReferences.html * Merging trunk * Adding flavor filtering * Instance deletions in Openstack are immediate. This can cause data to be lost accidentally * Makes sure ips are moved on the bridge for nodes running dnsmasq so that the gateway ip is always first * clean up based on cerberus review * Remove keystone middlewares * Merged trunk * merged trunk * floating ip could have no project and we should allow access * actions on floating IPs in other projects for non-admins should not be allowed * floating\_ip\_get\_by\_address should check user's project\_id * Pep8 fixes * Merging trunk * Refactoring instance\_type\_get\_all * merge trunk, fix conflicts * Fixed unit tests with some minor refactoring * merge from trunk * convert images that are not 'raw' to 'raw' during caching to node * Add iptables filter rules for dnsmasq (lp:844935) * merge with trunk r1601 * merged with trunk * Reverted some changes to instance\_get\_all\_by\_filters() that was added in rev 1594. An additional argument for filtering on instance uuids is not needed, as you can add 'uuid: uuid\_list' into the filters dictionary. Just needed to add 'uuid' as an exact\_match\_filter. This restores the filtering to do a single DB query * merged trunk and resolved conflict * Removed the extra code added to support filtering instances by instance uuids. Instead, added 'uuid' to the list of exact\_filter\_match names. 
Updated the caller to add 'uuid: uuid\_list' to the filters dictionary, instead of passing it in as another argument. Updated the ID to UUID mapping code to return a dictionary, which allows the caller to be more efficient... It removes an extra loop there. A couple of typo fixes * Adds the ability to automatically confirm resizes after the \`resize\_confirm\_window\` (0/disabled by default) * PEP8 cleanup * \* Remove the foreign key and backrefs tying vif<->instance \* Update instance filtering to pass ip related filters to the network manager \* move/update tests * Merging trunk * merge with trunk * Corrected the status in DB call * Merged trunk * remove unused import * merge the sknurt * remove the polymorph * Fixes the handling of snapshotting in libvirt driver to actually use the proper image type instead of using raw for everything. Also cleans up an unneeded flag. Based on doude's initial work * merge with trunk * Some Linux systems can also be slow to start the guest agent. This branch extends the windows agent timeout to apply to all systems * Fix a bug that would make spawning new instances fail if no port/protocol is given (for rules granting access for other security groups) * Merging trunk * Authorize to start a LXC instance withour, key, network file to inject or metadata * Update the v1.0 rescue admin action and the v1.1 rescue extension to generate 'adminPass'. Fixes an issue where rescue commands were broken on XenServer. lp#838518 * merge the trunks * Fixes libvirt rescue to use the same strategy as xen. Use a new copy of the base image as the rescue image. It leaves the original rescue image flags in, so a hand picked rescue image can still be used if desired * merge the trunks * Merged trunk * I am using iputils-arping package to send arping command. You will need to install this package on the network nodes using apt-get command apt-get install iputils-arping * Removed sudo from the arguments * merge from trunk * Make sure grantee\_group is eagerly loaded * Merged trunk * trunk merge * it merges the trunk; or else it gets the conflicts again * This makes the OS api extension for booting from volumes work. 
The \_get\_view\_builder method was replaced in the parent class, but the BootFromVolume controller was not updated to use the new method * Merged trunk * Adding flavor extra data extension * revert last change * build the query with the query builder * update db api for split filterings searches * Merged from trunk and resolved conflicts * Merged trunk * The 1.1 API specifies that two vendor content types are allowed in addition to the standard JSON and XML content types * Adding progress * 0 for the instance id is False ;) * merge trunk * fix up the filtering so it does not return duplicates if both the network and the db filters match * Fix issue where floating ips don't get recreated when a network host reboots * Initial pass at automatically confirming resizes after a given window * merge trunks * get all the vifs * get all the vifs * resolve conflicts / merge with trunk revno 1569 * Fixes an issue where 'invalid literal for int' would occur when listing images after making a v1.1 server snapshot (with a UUID) * merge the trunk * remove the vif joins, some dead code, and the ability to take in some instances for filtering * allow passing in of instances already * trunk merge * makes sure floating addresses are associated with host on associate so they come back * This branch changes XML Serializers and their tests to use lxml.etree instead of minidom * - remove translation of non-recognized attributes to user metadata, now just ignored - ensure all keys are defined in image dictionaries, defaulting to None if glance client doesn't provide one - remove BaseImageService - reorganize some GlanceImageService tests * we're back * PEP8 cleanups * working on getting tests back * merging trunk; resolving conflicts * pep8 fixes in nova/db/sqlalchemy/api.py and nova/virt/disk.py * pep8 fixes * merging trunk; resolving conflicts * Some arches dont have dmidecode, check to see if libvirt is capable of running rather getInfo of the arch its running on * fixups * parent merge * bug fixes * merging trunk * trunk merge * When vpn=true in allocate ip, it attempts to allocate the ip that is reserved in the network. Unfortunately fixed\_ip\_associate attempts to ignore reserved ips. This fix allows to filter reserved ip address only when vpn=True * Stock zones follows a fill-first methodology—the current zone is filled with instances before other zones are considered. This adds a flag to nova to select a spread-first methodology. The implementation is simply adding a random.shuffle() prior to sorting the list of potential compute hosts by weights * Pass reboot\_type (either HARD or SOFT) to the virt layers from the API * merging trunk * pull-up from trunk; move spread\_first into base\_scheduler.py * trunk merge * Merged trunk * merged rbp * adds a fake\_network module to tests to generate sensible network info for tests. It does not require using the db * Adding a can\_read\_deleted filter back to db.api.instance\_get\_all\_by\_filters that was removed in a recent merge * Merged trunk * child zone queries working with keystone now * Added docstring to explain usage of reserved keyword argument * One more bug fix to make zones work in trunk. Basic problem is that in novaclient using the 1.0 OSAPI, servers.create() takes an ipgroups argument, but when using the 1.1 OSAPI, it doesn't, which means booting instances in child zones won't work with OSAPI v1.0. 
This fix works around that by using keyword arguments for all the arguments after the flavor, and dropping the unused ipgroups argument * Fixes the reroute\_compute decorator in the scheduler API so that it properly: * Fix lp:844155 * Changing a behavior of update\_dhcp() to write out dhcp options file. This option file make dnsmasq offer a default gateway to only NICs of VM belonging to a network that the first NIC of VM belongs to. So, first NIC of VM must be connected to a network that a correct default gateway exists in. By means of this, VM will not get incorrect default gateways * merged trunk * merging trunk * merging trunk * merged trunk * Make weigh\_hosts() return a host per instance, instead of just a list of hosts * converting fix to just address ec2; updating test * Merged trunk * pull-up from trunk * pull-up from trunk * pull-up from trunk * adding can\_read\_deleted back to db api * This code contains contains a new NetworkManager class that can leverage Quantum + Melange * merge trunk * pull-up from trunk * Fixes a case where if a VIF is returned with a NULL network it might not be able to be deleted. Added test case for that fix * Merged trunk * merged trunk * An AMI image without ramdisk image should start * At present, the os servers.detail api does not return server.user\_id or server.tenant\_id. This is problematic, since the servers.detail api defaults to returning all servers for all users of a tenant, which makes it impossible to tell which user is associated with which server * merged trunk * trunk merge * revert codes for db * correct a method to collect instances from db add interface data to test * meeging trunk * format for pep8 * implement unit test for linux\_net * Fix bug #835919 that output a option file for dnsmasq not to offer a default gateway on second vif * Merged trunk * Added list of security groups to the newly added extension (Createserverext) for the Create Server and Get Server detail responses * feedback from jk0's review, including removing a lot of spaces from docstrings * merged trunk * Fix for LP Bug #839269 * Fixes a small bug which causes filters to not work at all. Also reworks a bit of exception handling to allow the exception related to the bug to propagate up * Fixed review comments * pull-up from trunk * Merged trunk * Glance can now perform its own authentication/authorization checks when we're using keystone * Resolved conflicts and fixed pep8 errors * trunk merge * pull-up from trunk * - implements changes-since for servers resource - default sort is now created\_at desc for instances * merging trunk * Accept keypair when you launch a new server. These properties would be stored along with the other server properties in the database (like they are currently for ec2 api) * merge trunk, fix tests * merge trunk * Simple usage extension for nova. Uses db to calculate tenant\_usage for specified time periods * Fix for LP Bug #838251 * merged trunk * Fixed and improved the way instance "states" are set. Instead of relying on solely the power\_state of a VM, there are now explicitly defined VM states and VM task states which respectively define the current state of the VM and the task which is currently being performed by the VM * Implements lp:798876 which is 'switch carrot to kombu'. Leaves carrot as the default for now... decision will be made later to switch the default to kombu after further testing. 
There's a lot of code duplication between carrot and kombu, but I left it that way in preparation for ripping carrot out later and to keep minimal changes to carrot * trunk merge * merged trunk * Removed extraneous import and s/vm\_state.STOP/vm\_states.STOPPED/ * make two functions instead of fast flag and add compute api commands instead of hitting db directly * merged trunk * changing default sort to created\_at * supporting changes-since * merged trunk * Merged trunk * Adds assertIn and assertNotIn support to TestCase for compatibility with python 2.6 This is a very minimal addition which doesn't require unittest2 * support the extra optional arguments for msg to assertIn and assertNotIn * fix for assertIn and assertNotIn use which was added in python 2.7. this makes things work on 2.6 still * merge trunk * restore fixed\_ip\_associate\_pool in nova/db/sqlalchemy.py to its original form before this branch. Figured out how to make unit tests pass without requiring that this function changes * use 'uuid' field in networks table rather than 'bridge'. Specify project\_id when creating instance in unit test * Virtual Storage Array (VSA) feature. - new Virtual Storage Array (VSA) objects / OS API extensions / APIs / CLIs - new schedulers for selecting nodes with particular volume capabilities - new special volume driver - report volume capabilities - some fixes for volume types * use db layer for aggregation * merged trunk * merge trunk * merged with rev.1499 * VSA code redesign. Drive types completely replaced by Volume types * merged trunk * review feedback * Merged trunk * Added: - volume metadata - volume types - volume types extra\_specs * merged trunk * Merged trunk * Once a network is associated with project, I can’t delete this network with ‘nova-manage network delete’. As you know, I can delete network by scrubbing the project with ‘nova-manage project scrub’. However it is too much. The cause of this problem is there is no modify command of network attribute * merged with volume types (based on rev.1490). no code rework yet * merged with volume\_types. no code refactoring yet * merged with nova 1490 * added new tables to list of DBs in migration.py * merged trunk * added Openstack APIs for volume types & extradata * Merged from trunk * The notifiers API was changed to take a list of notifiers. Some people might want to use more than one notifier so hopefully this will be accepted into trunk * merge trunk, fix tests * pep8 compliant * merged with rev.1485 * Merged trunk * added volume metadata APIs (OS & volume layers), search volume by metadata & other * Merged from trunk * Stub out the DB in unit test. Fix 'nova-manage network modify' to use db.network\_update() * Merged from upstream * I added notifications decorator for each API call using monkey\_patching. By this merge, users can get API call notification from any modules * Fixes bug that causes 400 status code when an instance wasn't attached to a network * Merged from upstream * merging trunk * Removed blank line * Merged with trunk * Fixed typo and docstring and example class name * Merged trunk * This branch does the final tear out of AuthManager from the main code. 
  The NoAuth middlewares (active by default) allow a user to specify any user and project id through headers (os_api) or access key (ec2_api)
* pulling all qmanager changes into a branch based on trunk, as they were previously stacked on top of melange
* merge trunk, resolve conflicts, fix tests
* Our goal is to add an optional parameter to the Create server OS 1.0 and 1.1 API to achieve the following objectives:
* Fixes bug 831627 where nova-manage does not exit when given a non-existent network address
* initial cut on volume type APIs
* Merged from trunk, resolved conflicts and fixed broken unit tests due to changes in the extensions which now include ProjectMapper
* Fixed conflict with branch
* merged trunk
* Added test code, doc string, and fixed pip-requires
* Merged trunk
* Merged from upstream
* merged trunk
* implemented tenant ids to be included in request uris
* Upstream merge
* Fix pep8
* delete debug code
* Use 'vm_state' instead of 'state' in instance filters query
* Add 'nova-manage network modify' command
* Merged trunk
* merge with trunk
* Adds accessIPv4 and accessIPv6 to servers requests and responses as per the current spec
* Fixes utils.to_primitive (again) to handle modules, builtins and whatever other crap might be hiding in an object
* Added OS APIs to associate/disassociate security groups to/from instances
* Merged from trunk
* Assorted fixes to os-floating-ips to make it play nicely with an in-progress novaclient implementation, as well as some changes to make it more consistent with other os rest apis. Changes include:
* Merged trunk
* Merged from trunk and fixed review comments
* Fixed review comments
* Fixed typo
* merged trunk
* Merged with trunk
* merge from trunk
* Added monkey patching notification code function
* Next round of prep for keystone integration
* Merged from trunk
* Fixes primitive with builtins, modules, etc
* merged trunk
* merge with trunk
* Added uuid column in virtual_interfaces table, and an OpenStack extension API for virtual interfaces to expose these IDs. Also set this UUID as one of the external IDs in the OVS vif driver
* merge
* Merged trunk
* merged trunk
* Currently, rescue/unrescue is only available over the admin API. Non-admin tenants also need to be able to access this functionality. This patch adds rescue functionality over an API extension
* Makes all of the binary services launch using the same strategy.
  - Removes helper methods from utils for loading flags and logging
  - Changes service.serve to use Launcher
  - Changes service.wait to actually wait for all the services to exit
  - Changes nova-api to explicitly load flags and logging and use service.serve
  - Fixes the annoying IOError when /etc/nova/nova.conf doesn't exist
* merged trunk
* added volume metadata. Fixed test_volume_types_extra_specs
* merge trunk
* Fixes lp828207
* Fixed bug in which DescribeInstances was returning deleted instances. Added tests for pertinent api methods
* Accept binary user_data in radix-64 format when you launch a new server using OSAPI. This user_data would be stored along with the other server properties in the database.
  Once the VM instance boots you can query for the user-data to do any custom installation of applications/servers or do some specific job like setting up a network routing table
* added unittests for volume_extra_data
* Change the call name
* merged trunk
* Merged with trunk
* first cut on types & extra-data (only DB work, no tests)
* merge from trunk
* Updated a number of items to pave the way for new states
* Merged trunk
* Fixed several logical errors in the scheduling process. Renamed the 'ZoneAwareScheduler' to 'AbstractScheduler', since the zone-specific designation is no longer relevant. Created a BaseScheduler class that has basic filter_hosts() and weigh_hosts() capabilities. Moved the filters out of one large file and into a 'filters' subdirectory of nova/scheduler
* Merged trunk
* merged trunk
* Merged with trunk and fixed broken testcases
* merged with nova-1450
* Make all services use the same launching strategy
* Merged trunk
* Pep8 fixes
* Split set state into vm, task, and power state functions
* merge from trunk
* Merged trunk
* merge trunk
* Resolved conflicts and merged with trunk
* Added uuid for networks and made changes to the Create server API format to accept network as uuid instead of id
* I'm taking Thierry at his word that I should merge early and merge often :)
* Allow local_gb size to be 0. libvirt uses local_gb as a secondary drive, but XenServer uses it as the root partition's size. Now we support both
* Merged trunk
* merge from trunk
* Use netaddr's subnet features to calculate subnets
* - Added search instance by metadata. - instance_get_all_by_filters should filter deleted
* This branch implements a nova api extension which allows you to manage and update tenant/project quotas
* making get project quotas require a context which has access to the project/tenant
* merge from trunk
* Updated the EC2 metadata controller so that it returns the correct value for instance-type metadata
* merge the trunk
* Merged with upstream
* merge with trunk
* Validate the size of VHD files in OVF containers
* Merged trunk
* Merged trunk
* Merged trunk
* merge trunk
* Adding kvm-block-migration feature
* merge trunk, remove _validate_cidrs and replace functionality with a double for loop
* fix bug in which DescribeInstances in the EC2 api was returning deleted instances
* Merged with trunk
* Merged trunk
* Add durable flag for rabbit queues
* merged trunk
* Merged trunk
* Added ability to boot a VM from an install ISO. The system detects an image of type iso. The image is streamed to a VDI and mounted to the VM. A blank disk is allocated to the VM based on instance type
* Add source-group filtering
* added logic to make the creation of networks (IPv4 only) validation a bit smarter:
  - detects if the cidr is already in use
  - detects if any existing smaller networks are within the range of requested cidr(s)
  - detects if splitting a supernet into # of num_networks && network_size will fit
  - detects if requested cidr(s) are within range of already existing supernet (larger cidr)
* Fix v1.1 /servers/ PUT request to match API documentation by returning 200 code and the server data in the body
* have NetworkManager generate MAC address and pass it to the driver for plugging.
  Sets the stage for being able to do duplicate checks on those MACs as well
* merge trunk, fix conflict from dprince's branch to remove hostname from bin/nova-dhcpbridge
* merge in trunk, resolving conflicts with ttx's branch to switch from using sudo to run_as_root=True
* remerge trunk
* Added add-securitygroup-to-instance and remove-securitygroup-from-instance functionality
* Merged with trunk and fixed broken unit testcases
* merged rev1418 and fixed code so that a less than 1G image can be migrated
* merge from trunk
* merge from trunk
* Merged trunk
* Allows for a tunable number of SQL connections to be maintained between services and the SQL server using new configuration flags. Only applies when using the MySQLdb dialect in SQLAlchemy
* Merged trunk
* Merged trunk
* API needs virtual_interfaces.instance joined when pulling instances from the DB. Updated instance_get_all() to match instance_get_all_by_filters() even though the former is only used by nova-manage now. (The latter is used by the API)
* join virtual_interfaces.instance for DB queries for instances. updates instance_get_all to match instance_get_all_by_filters
* merged trunk
* Merged with trunk
* Support for management of security groups in OS API as a new extension
* Merged with trunk
* merge from trunk
* merged with 1416
* moved vsa_id to metadata. Added search by meta
* Added search instance by metadata. get_all_by_filters should filter deleted
* merged trunk
* Removes rogue direct usage of the subprocess module by proper utils.execute calls
  - Adds a run_as_root parameter to utils.execute, that prefixes your command with FLAG.root_helper (which defaults to 'sudo')
  - Turns all sudo calls into run_as_root=True calls
  - Update fakes accordingly
  - Replaces usage of "sudo -E" and the "addl_env" parameter with passing the environment in the command (allows it to be compatible with alternative sudo_helpers)
  - Additionally, forces close_fds=True on all utils.execute calls, since it's a more secure default
* Fixed broken unit testcases
* merge from trunk
* tenant_id -> project_id
* These fixes are the result of trolling the pylint violations here
* Pass py_modules=[] to setup to avoid installing run_tests.py as a top-level module
* Pass py_modules=[] to setup to avoid installing run_tests.py as a top-level module
* merge trunk
* Dropped vsa_id from instances
* Merged with trunk
* use correct variable name
* Merged with trunk
* merge from trunk
* merged with nova-1411
* This adds the servers search capabilities defined in the OS API v1.1 spec.. and more for admins
* merged trunk
* Update the OSAPI v1.1 server 'createImage' and 'createBackup' actions to limit the number of image metadata items based on the configured quota.allowed_metadata_items that is set
* Instance metadata now functionally works (completely to spec) through OSAPI
* updating tests; fixing create output; review fixes
* Rename sudo_helper FLAG into root_helper
* Initial validation for ec2 security group names
* Command args can be a tuple, convert them to a list
* Fix usage of sudo -E and addl_env in dnsmasq/radvd calls, remove addl_env support, fix fake_execute allowed kwargs
* Use close_fds by default since it's good for you
* Fix ajaxterm's use of shell=True, prevent vmops.py from running its own version of utils.execute
* With this branch, boot-from-volume can be marked as completed in some sense. The remaining work is minor if any and will be addressed as bug fixes
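The run_as_root/root_helper behaviour described in the utils.execute entries above can be illustrated with a minimal Python sketch. This is only an illustration of the idea (the ROOT_HELPER constant and the function shape here are assumptions, not the historical nova code):

    import shlex
    import subprocess

    ROOT_HELPER = 'sudo'  # stand-in for the root_helper flag (assumed name/default)

    def execute(*cmd, run_as_root=False, check_exit_code=True):
        """Run cmd; prefix it with the root helper when run_as_root is True."""
        cmd = [str(c) for c in cmd]
        if run_as_root:
            cmd = shlex.split(ROOT_HELPER) + cmd
        result = subprocess.run(cmd, capture_output=True, close_fds=True)
        if check_exit_code and result.returncode != 0:
            raise RuntimeError('Command failed: %s' % ' '.join(cmd))
        return result.stdout, result.stderr

A caller would then write execute('ip', 'link', 'show', run_as_root=True) instead of invoking sudo directly, so the privilege-escalation mechanism stays configurable in one place.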
* Added xml schema validation for extensions resources. Added corresponding xml schemas. Added lxml dep, which is needed for doing xml schema validation
* Fixing a bug in nova.utils.novadir()
* Adds the ability to read/write to a local xenhost config. No changes to the nova codebase; this will be used only by admin tools that have yet to be created
* making server metadata work functionally
* cleaning up instance metadata api code
* Merged trunk
* remove obsolete script from setup.py
* Resolve conflicts and fixed broken unit testcases
* This branch adds additional capability to the hosts API extension. The new options allow an admin to reboot or shutdown a host. I also added code to hide this extension if --allow-admin-api is False, as regular users should have no access to host API calls
* Adds OS API 1.1 support
* another trunk merge
* Merged trunk
* Merged trunk
* merged with 1383
* Updated with code changes on LP
* Merged trunk
* Sync trunk
* Sync trunk
* Added possibility to mark a fixed ip as reserved and unreserved
* Refactored code to reduce lines of code and changed method signature
* Allow actions queries by UUID and PEP8 fixes
* Allow actions queries by UUID and PEP8 fixes
* Implemented @test.skip_unless and @test.skip_if functionality in nova/test.py
* merged with 1382
* Updates v1.1 servers/id/action requests to comply with the 1.1 spec
* merging trunk
* pep8 violations sneaking into trunk?
* trunk merge
* Add run_as_root parameter to utils.execute, uses new sudo_helper FLAG to prefix command
* Remove spurious direct use of subprocess
* Trunk contained PEP8 errors. Fixed
* Trunk merge
* merged trunk
* merged with nova trunk
* utilized functools.wraps
* tests and merge with trunk
* merged trunk
* For the nova-manage network create cmd, added a warning when the size of the subnet(s) being created is larger than FLAG.network_size, in an attempt to alleviate confusion. For example, currently with 'nova-manage network create foo 192.168.0.0/16', the result is that it creates a 192.168.0.0/24 instead without any indication as to why
* Remove instances of the "diaper pattern"
* There was a recent change to how we should flip FLAGS in tests, but not all tests were fixed. This covers the rest of them. I also added a method to test.UnitTest so that FLAGS.verbose can be set. This removes the need for flags to be imported from a lot of tests
* Merged in the power action changes
* Fixed rescue/unrescue since the swap changes landed in trunk. Minor refactoring (renaming callback to _callback since it's not used here)
* another merge
* Merged trunk
* Merged trunk
* Added xenhost config get/setting
* remove storing original flags verbosity
* remove set_flags_verbosity.. it's not needed
* Merged trunk
* Update the OS API servers metadata resource to match the current v1.1 specification - move /servers//meta to /servers//metadata - add PUT /servers//metadata
* merged trunk
* Sync with latest tests
* Moves code restarting instances after compute node reboot from libvirt driver to compute manager; makes start_guests_on_host_boot flag global
* Moved server actions tests to their own test file. Updated stubbing and how flags are set to be in line with how they're supposed to be set in tests
* merging trunk
* Nova uses instance_type_id and flavor_id interchangeably when they almost always have different values. This can often lead to an instance changing instance_type during migration because the values passed around internally are wrong. This branch changes nova to use instance_type_id internally and flavor_id in the API.
  This will hopefully avoid confusion in the future
* Conditionals were not actually running the tests when they were supposed to. Renamed example testcases
* Remove instances of the "diaper pattern"
* Initial version
* switch FLAGS.* = in tests to self.flags(...); remove unused cases of FLAGS from tests; modified test.TestCase's flags() to allow multiple overrides; added missing license to test_rpc_amqp.py
* more cleanup of API tests regarding FLAGS
* Merged trunk
* Merged trunk
* Merged trunk and fixed conflicts to make tests pass
* Yet another conflict resolved
* merged from trunk
* merged from trunk
* merge trunk
* Resolved pep8 errors
* merging trunk
* Merged trunk
* Fixes lp819523
* Fix for bug #798298
* Merged trunk
* Add support for 300 Multiple Choice responses when no version identifier is used in the URI (or no version header is present)
* Merged trunk
* Glance has been updated for integration with keystone. That means that nova needs to forward the user's credentials (the auth token) when it uses the glance API. This patch, combined with a forthcoming patch for nova_auth_token.py in keystone, establishes that for nova itself and for xenapi; other hypervisors will need to set up the appropriate hooks for their use of glance
* Added changes from mini server
* fix failing tests
* merge from trunk
* merge the trunk
* Merged trunk
* merged trunk
* Merged trunk
* Merged from lab
* fix pylint errors
* merge from trunk
* Moves image creation from POST /images to POST /servers//action
* Remove Twisted dependency from pip-requires
  - Remove Twisted patch from tools/install_venv.py
  - Remove eventlet patch from tools/install_venv.py
  - Remove tools/eventlet-patch
  - Remove nova/twistd.py
  - Remove nova/tests/test_twistd.py
  - Remove bin/nova-instancemonitor
  - Remove nova/compute/monitor.py
  - Add xattr to pip-requires until glance setup.py installs it correctly
  - Remove references to removed files from docs/translations/code
* Merged trunk
* pull-up from trunk/fix merge conflict
* pull-up from trunk
* Removing the xenapi_image_service flag in favor of image_service
* Merged trunk
* removing compute monitor
* merge from trunk
* While we currently trap JSON encoding exceptions and bail out, for error notification it's more important that *some* form of the message gets out. So, we take complex notification payloads and convert them to something we know can be expressed in JSON
* Better error handling for resizing
* merged trunk rev1348
* merged with nova trunk
* Added @test.skip_unless and @test.skip_if functionality. Also created nova/tests/test_skip_examples.py to show the skip cases usage
* merge trunk, resolve conflict in net/manager.py in favor of vif-plug
* initial commit of vif-plugging for network-service interfaces
* Merged trunk
* merged from trunk
* merge with trunk, resolve conflicts
* merge from trunk
* Resync to trunk
* merging
* FlavorNotFound already existed, no need to create another exception
* You see what happens Danny when you forget to close the parenthesis
* Merged with trunk
* Merged trunk
* allow getting by the cidr_v6
* merging trunk
* pull-up from trunk and conflict resolution
* merge trunk
* Round 1 of changes for keystone integration.
  - Modified request context to allow it to hold all of the relevant data from the auth component.
  - Pulled out access to AuthManager from as many places as possible
  - Massive cleanup of unit tests
  - Made the openstack api fakes use fake Authentication by default
* pull-up from trunk
* Fix various errors discovered by pylint and pyflakes
* merged trunk
* This change creates a minimalist API abstraction for the nova/rpc.py code so that it's possible to use other queue mechanisms besides Rabbit and/or AMQP, and even use other drivers for AMQP rather than Rabbit. The change is intended to give the least amount of interference with the rest of the code, fixes several bugs in the tests, and works with the current branch. I also have a small demo driver+server for using 0MQ which I'll submit after this patch is merged
* made the whole instance handling thing optional
* pull-up from trunk; fix problem obscuring context module with context param; fix conflicts and no-longer-skipped tests
* --Stolen from https://code.launchpad.net/~cerberus/nova/lp809909/+merge/68602
* Use the util.import_object to import a module
* merged trunk and fix time call
* merge trunk
* merged trunk
* added instance support to to_primitive and tests
* merge with trunk
* Adds XML serialization for servers responses that match the current v1.1 spec
* merging trunk
* Use utils.utcnow. Use True instead of literal 1
* merge trunk
* Updated deserialization of POST /servers in the OSAPI to match the latest v1.1 spec
* pull-up from trunk
* merge trunk
* merge from trunk
* merge to trunk
* some minor cosmetic work. addressed some dead code sections
* merged with nova-1336
* merged trunk
* fix undefined variable errors
* fix call to nonexistent method to_global_ipv6. Add myself to authors file
* updates handling of arguments in nova-manage network create. updates a few of the arguments to nova-manage and related help. updates nova-manage to raise proper exceptions
* Fail silently
* Fixed conflict
* Merged with trunk and fixed broken unit test cases
* merged trunk
* merge from trunk
* pull-up from trunk
* Makes security group rules work with the newer version of the ec2 api and correctly supports boto 2.0
* merging parent branch servers-xml-serialization
* merged recent trunk
* merge with trunk
* Resolved conflicts with trunk
* Implements a simplified messaging abstraction with the least amount of impact to the code base
* merging parent branch lp:~rackspace-titan/nova/osapi-create-server
* VSA volume creation/deletion changes
* Updates to the compute API and manager so that rebuild, reboot, snapshots, and password resets work with the most recent versions of novaclient
* merging trunk; resolving conflicts
* queries in the models.Instance context need to reference the table by name (fixed_ips); however queries in the models.FloatingIp context alias the tables out properly and return the data as fixed_ip (which is why you need to reference it by fixed_ip in that context)
* merged from trunk
* merged trunk
* merging trunk
* pull-up from trunk
* Updates /servers requests to follow the v1.1 spec, except for implementation of uuids replacing ids and access ips, both of which are not yet implemented. Also, does not include serialized xml responses
* merged trunk
* merge from trunk
* merged trunk
* I'm sorry for my failure with rebasing. Anyway, the previous branch grew too many other features, so I supersede it. 1. Used optparse for parsing the arg string 2. Added decorator for describe method params 3. Added option for assigning a network to a certain project. 4.
  Added a field to "network list" showing which project owns a network
* Moved the VIF network connectivity logic ('ensure_bridge' and 'ensure_vlan_bridge') from the network managers to the virt layer. In addition, a VIF driver class is added to allow customized VIF configurations for various types of VIFs and underlying network technologies
* merge with trunk, resolve conflicts
* fixing merge conflict
* merge from trunk
* merged with 1320
* some cleanup. VSA flag status changes. returned some files
* merged trunk
* This fixes the xml serialization of the /extensions and /extensions/foo resources. Add an ExtensionsXMLSerializer class and corresponding unit tests
* merge with trunk, resolve conflicts
* db/api: fix network_get_by_cidr()
* db/api: block_device_mapping_update_or_create()
* Merged with 1306 + fix for dns change
* merge with 1305
* Adds ability to set DNS entries on network create. Also allows 2 dns servers per network to be specified
* Reverted volume driver part
* pass in the right argument
* merged trunk
* Merged Dan's branch
* Merged trunk
* merge with trunk, resolve conflicts
* merge ryu's branch
* start removing references to AuthManager
* change context to maintain exact time, store roles, use ids instead of objects and use a uuid for request_id
* Resolved conflict with trunk
* merge trunk
* Updated the compute API so that has_finished_migration uses instance_uuid. Fixes some regressions with 1295-1296
* This fixes issues with invalid flavorRefs being passed in returning a 500 instead of a 400, and adds tests to verify that two separate cases work
* merge from trunk
* Perform fault wrapping in the openstack WSGI controller. This allows us to just raise webob Exceptions in OS API controllers with the appropriate explanations set. This resolves some inconsistencies with exception raising and returning that would cause HTML output to occur when faults weren't being handled correctly
* Merged with trunk which includes ha-net changes
* Updated the compute API so that has_finished_migration uses instance_uuid. Fixes some regressions with 1295-1296
* allow 2 dns servers to be specified on network create
* Fixes lp813006
* Fixes lp808949 - "resize doesn't work with recent novaclient"
* merge with trunk
* - Add 'fixed_ipv6' property to VirtualInterface model - Expose ipv6 addresses in each network in OSAPI v1.1
* Merged trunk
* Merged lp:~danwent/nova/network-refactoring
* Adds HA networking (multi_host) option to networks
* merge ryu's branch
* Merged trunk
* merged trunk
* network api release_floating_ip method will now check to see if an instance is associated with it, prior to releasing
* Fixes lp809587
* Merged with trunk
* Reverted to original code; after the network-binding-to-project code is in integration, code for testing the new extension will be added
* Fixes lp813006 - inconsistent DB API naming
* merged from trunk
* Merged trunk
* Merged with trunk
* merged trunk
* merged trunk
* fixed reviewer's comment. 1. ctxt -> context, 2. erase unnecessary exception message from nova.scheduler.driver
* merged trunk
* This change adds the basic boot-from-volume support to the image service
* pep8 cleanup
* adding fixed_ipv6 property to VirtualInterface model; exposing ipv6 in api
* Merged with trunk
* fix issues that were breaking vlan mode
* added missing instance_get_all_by_vsa
* VSA: first cut. merged with 1279
* merging trunk
* Adds greater configuration flexibility to rate limiting via api-paste.ini.
  In particular:
* merge with trunk
* - Present ip addresses in their actual networks, not just a static public/private - Floating ip addresses are grouped into the networks with their associated fixed ips - Add addresses attribute to server entities
* merge with trunk, resolve conflicts
* The existing Windows agent behaves differently than the Unix agents and requires some workarounds to operate properly. Fixes are going into the Windows agent to make it behave better, but workarounds are needed for compatibility with the existing installed base
* Merged with trunk and fixed pep errors
* added integrated unit testcases and minor fixes
* First pass
* merging trunk
* pull-up from trunk, while we're at it
* Merged with Trunk
* Updated responses for GET /images and GET /images/detail to respect the OSAPI v1.1 spec
* merge
* merge from trunk
* Extends the exception.wrap_exception decorator to optionally send an update to the notification system in the event of a failure
* trunk merge
* merging trunk
* updating testing; simplifying instance-level code
* adding test; casting instance to dict to prevent sqlalchemy errors
* merged branch lp:~rackspace-titan/nova/images-response-formatting
* merged trunk
* merge with trunk
* Starting part of multi-nic support in the guest. Adds the remove_fixed_ip code, but is incomplete as it needs the API extension that Vek is working on
* merged trunk
* fix reviewer's comment
* fixed marshalling problem to cast_compute..
* This doesn't actually fix anything anymore, as the wsgi_refactor branch from Waldon took care of the issue. However, a couple rescue unit tests would have caught this originally, so I'm proposing this to include those
* Merged with Trunk
* add optional parameter networks to the Create server OS API
* Made xen plugins rpm noarch
* Set the proper return code for server delete requests
* merging trunk
* minor tweaks
* Adds an extension which makes add_fixed_ip() available through an OpenStack extension
* Fix bug 800759
* fix conflict
* Fixed up an incorrect key being used to check Zones
* merged trunk
* fix tests
* make sure that old networks get the same dhcp ip so we don't break existing deployments
* cleaned up on set network host to _setup_network and made networks allocate ips dynamically
* Make the instance migration calls available via the API
* Merged trunk
* image/fake: added teardown method
* merge with trunk
* pull-up from trunk
* pull-up from trunk
* Merging issues
* implemented clean-up logic when a VM fails to spawn for the xenapi back-end
* Adds the os-hosts API extension for interacting with hosts while performing maintenance. This differs from the previous merge prop as it uses a RESTful design instead of GET-based actions
* stricter zone_id checking
* trunk merge
* Merged trunk
* Updated the links container for flavors to be compliant with the current spec
* merged trunk
* Add a socket server responding with an allowing flash socket policy for all requests from flash on port 843 to nova-vncproxy
* Pull-up from trunk (post-multi_nic)
* removed extra comment
* merged trunk
* merge code I'd split from instance_get_fixed_addresses_v6 that no longer needs to be split
* rename _check_servers_options, add some comments and small cleanup in the db get_by_filters call
* convert filter value to a string just in case before running re.compile
* pep8 fixes
* test fixes..
  one more to go
* merged trunk
* test fixes and typos
* typos
* cleanup checking of options in the API before calling compute_api's get_all()
* a lot of major re-work.. still things to finish up
* merged trunk
* merged trunk
* missing doc strings for fixed_ip calls I renamed
* merged trunk
* pep8 fixes
* merged trunk
* added searching by 'image', 'flavor', and 'status'; reverted ip/ip6 searching to be admin only
* compute's get_all should accept 'name' not 'display_name' for searching Instance.display_name. Removed 'server_name' searching.. Fixed DB calls for searching to filter results based on context
* clean up checking for exclusive search options; fix a cut-and-paste error with instance_get_all_by_name_regexp
* merged trunk
* fix bugs with fixed_ip returning a 404; instance searching needs to joinload more stuff
* added searching by instance name; added unit tests
* pep8 fixes
* Replace 'like' support with 'regexp' matching done in python. Since 'like' would result in a full table scan anyway, this is a bit more flexible. Make search options and matching a little more generic. Return 404 when --fixed_ip doesn't match any instance, instead of a 500 only when the IP isn't in the FixedIps table
* update tests
* add ability to set multi_host in nova-manage and remove debugging issues
* pass in dhcp server address, fix a bunch of bugs
* make sure to filter out ips associated by host and add some sync for allocating ip to host
* First round of changes for ha-flatdhcp
* fixed a bug which prevents suspend/resume after block-migration
* properly displays addresses in each network, not just public/private; adding addresses attribute to server entities
* after trunk merge
* Found some additional fixed_ip entries in the Instance model context that needed to be updated
* Changed fixed_ip.network to be fixed_ips.network, which is the correct DB field
* Added the GroupId param to any pertinent security_group methods that support it in the official AWS API
* Fixed the case where an exception was thrown when trying to get a list of flavors via the api yet there were no flavors to list
* fix up tests
* Update the fixed_ip_disassociate_all_by_timeout in nova.db.api so that it supports Postgres. Fixes casting errors on postgres with this function
* added multi-nic support
* trunk merge with migration renumbering
* Child Zone Weight adjustment available when adding Child Zones
* trunk merge
* merge trunk
* merged trunk
* Windows instances will often take a few minutes setting up the image on first boot and then reboot. We should be more patient for those systems, as well as check if the domid changes so we can send agent requests to the current domid
* refactored instance type code
* - add metadata container to /images/detail and /images/ responses - update xml serialization to encode image entities properly
* merging trunk
* trunk merge
* Update the fixed_ip_disassociate_all_by_timeout in nova.db.api so that it supports Postgres. Fixes casting errors on postgres with this function
* phew ... working
* compute_api.get_all should be able to recurse zones (bug 744217). Also, allow building more than one instance at once with zone_aware_scheduler types. Other cleanups with regards to the zone aware scheduler..
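The bool_from_str utility referenced in the next entry is needed because query parameters such as recurse_zones arrive as strings; a minimal Python sketch of what such a helper typically looks like (an assumption about its shape, not the verbatim historical code):

    def bool_from_str(val):
        """Interpret common truthy strings ('1', 'true', 'yes') as True."""
        if isinstance(val, bool):
            return val
        if not val:
            return False
        return str(val).strip().lower() in ('1', 'true', 't', 'yes', 'y')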
* fix issue of recurse_zones not being converted to bool properly; add bool_from_str util call; add test for bool_from_str; slight rework of min/max_count check
* merged trunk
* pulled in koelkers test changes
* merge with trey
* Merged trunk
* merged trunk, fixed the floating_ip fixed_ip exception stupidity
* trunk merge
* Implement backup with rotation and expose this functionality in the OS API
* Merged trunk
* adopt merge
* moved migration again & trunk merge
* Merged trunk
* merging trunk
* This adds system usage notifications using the notifications framework. These are designed to feed an external billing or similar system that subscribes to the nova feed and does the analysis
* Refactored usage generation
* merge with trey
* Re-worked some of the WSGI and WSGIService code to make launching WSGI services easier, less error prone, and more testable. Added tests for WSGI server, new WSGI loader, and modified integration tests where needed
* Merged trunk
* pep8 fix
* Adds support for "extra specs", additional capability requirements associated with instance types
* resync with trunk
* remerged trunk
* Re-merging code for generating system-usages to get around bzr merge braindeadness
* Added floating IP support in OS API
* This speeds up multiple runs of tests because it only runs db migrations if the test db doesn't exist. It also adds the -r/--recreate-db option to run_tests.sh to delete the test db so it will be recreated
* small formatting change
* merge with trey
* trunk merge, getting fierce.
* Merged trunk
* Added nova.version to utils.py
* Pulled trunk, merged boot from ISO changes
* fixed pep style
* review issues fixed
* merge with trunk
* Upstream merge
* merging trunk; adding error handling around image xml serialization
* only create the db if it doesn't exist, add an option -r to run_tests.py to delete it
* Fix for bug #788265. Remove created_at, updated_at and deleted_at from the instance_type dict returned by methods in the sqlalchemy API
* PEP8 fix
* pep8
* Updated _dict_with_extra_specs docstring
* Renamed _inst_type_query_to_dict -> _dict_with_extra_specs
* Merged from trunk
* Add api methods to delete provider firewall rules
* Removes the usage of the IPy module in favor of the netaddr module
* merged
* trunk merged. conflicts resolved
* added disassociate method to tests
* some tests and refactoring
* Trunk merge fixes
* Merging trunk
* Merged from trunk
* Merged with trunk
* Unwind last commit, force anyjson to use our serialization methods
* Now automatically populates the instance_type dict with extra_specs upon being retrieved from the database
* Created Bootstrapper to handle Nova bootstrapping logic
* trunk merge
* updated the way vifs/fixed_ips are deallocated and their relationships, altered lease/release fixed_ip
* This adds a way to create global firewall blocks that apply to all instances in your nova installation
* merge from trunk
* proper xml serialization for images
* Add xml serialization for all /images//meta and /images//meta/ responses
* trunk merge and migration bump
* Merged markwash's fixes
* Merged trunk
* Returned code to original location
* Merged from trunk
* This catches the InstanceNotFound exception on create, and ignores it. This prevents errors in the compute log, and causes the server to not be built (it should only get InstanceNotFound if the server was deleted right after being created).
  This is a temporary fix that should be fixed correctly once the no-db-messaging stuff is complete
* added virtual_interface_update method
* merging trunk
* added fixed ip filtering by null virtual interface_id to network get associated fixed ips
* fixed ip gets now have floating IPs correctly loaded
* fix some issues with flags and logging
* api/ec2, boot-from-volume: a unit test for describe instances
* db/block_device_mapping/api: introduce update_or_create
* merge with trunk
* fixed zone update
* trunk merge
* merge from trunk
* This branch adds support to the xenapi driver for updating the guest agent on creation of a new instance. This ensures that the guest agent is running the latest code before nova starts configuring networking, setting the root password or injecting files
* merge from trunk
* some libvirt multi-nic just to get it to work, from tushar
* merge with trey
* Filter out datetime fields from instance_type
* Merged trunk
* added adjust child zone test
* tests working again
* updated the exceptions around virtual interface creation, updated flatDHCP manager comment
* more trunks
* another trunk merge
* This patch adds support for working with instances by UUID in addition to integer IDs
* Merging trunk, fixing conflicts
* Cleanup and addition of tests for WSGI server
* Merged trunk
* Check that the server exists when interacting with the /v1.1/servers//meta resource
* merged rev trunk 1198
* Cleanup of the cleanup
* Cleaned up nova-api binary and logging a bit
* General cleanup and refactor of a lot of the API/WSGI service code
* Adding tests for is_uuid_like
* Implements a portion of ec2 ebs boot. What's implemented - block_device_mapping option for run instance with volume (ephemeral device and no device isn't supported yet) - stop/start instance
* updated fixed ip and floating ip exceptions
* Merging trunk
* moving instance existence logic down to api layer
* bunch of docstring changes
* Removes nova/image/local.py (LocalImageService)
* Increased error message readability for the OpenStack API
* merging trunk
* Upstream merge
* Rename: instance_type_metadata -> instance_type_extra_specs
* erroneous self in virtual_interface_delete_by_instance() sqlalchemy api
* Renaming to _build_instance_get
* merged trunk
* merge with trey
* Merged reldan changes
* First implementation of FloatingIpController
* Adds 'joinedload' statements where they need to be to prevent access of a 'detached' object
* syntax
* Merged trunk
* Added metadata joinedloads
* Prep-work to begin on reroute_compute
* Adding uuid test
* Pep8 Fixes
* Adding UUID test
* merge with nova trunk
* db/block_device_mapping_get_all_by_instance: don't raise
* PEP8 cleanups
* pep8
* The Xen driver supports running instances in PV or HVM modes, but the method it uses to determine which to use is complicated and doesn't work in all cases. The result is that images that need to use HVM mode (such as FreeBSD 64-bit) end up setting a property named 'os' set to 'windows'
* typo
* net base project id now from context; removed incorrect floating ip host assignment
* Phew ... ok, this is the last dist-scheduler merge before we get into serious testing and minor tweaks. The heavy lifting is largely done
* merged trunk
* merged trunk rev 1178
* merge with trey
* - fixes bug that prevented custom wsgi serialization
* merging trunk, fixing pep8
* This fixes the server_metadata create and update functions that were returning req.body (as a string) instead of body (the deserialized body dictionary object).
  It also adds checks where appropriate to make sure that body is not empty (and returns 400 if it is). Tests updated/added where appropriate
* merging trunk
* trunk merge
* merge trunk
* fix method chaining in database layer to pass the right parameters
* Add a method to delete provider firewall rules
* block migration feature added
* floating ips can now move around the network hosts
* Allows Nova to talk to multiple Glance APIs (without the need for an external load-balancer). Chooses a random Glance API for each request
* forgot a comma
* misc argument alterations
* trunk merge and ec2 tests fixed
* Add some docstrings for new agent build DB functions
* Record architecture of image for matching to agent build later. Add code to automatically update the agent running on an instance on instance creation
* tests working after merge-3 update
* Pull-up from multi_nic
* merged koelkers tests branch
* Merging trunk
* Merged trunk
* Fix merge conflict
* merged trunk again
* updated docstring for nova-manage network create
* Now forwards create instance requests to child zones. Refactored nova.compute.api.create() to support deferred db entry creation
* MySQL database tables are currently using the MyISAM engine. Created migration script nova/db/sqlalchemy/migrate_repo/versions/021_set_engine_mysql_innodb.py to change all current tables to InnoDB
* merged trunk again
* Cleaned up some pylint errors
* removed network_info shims in vmops
* trunk merge
* merge trunk
* Cleaned up some of the larger pylint errors. Set to ignore some lines that pylint just couldn't understand
* pep8
* Make libvirt snapshotting work with images that don't have an 'architecture' property
* take out the host
* run_instances will check the image for 'available' status before attempting to create a new instance
* Use True/False instead of 1/0 when setting/updating 'deleted' column attributes. Fixes casting issues when running nova with Postgres
* merged from trunk
* Use True/False instead of 1/0 when setting/updating 'deleted' column attributes. Fixes casting issues when running nova with Postgres
* This branch allows marker and limit parameters to be used on image listing (index and detail) requests. It parses the parameters from the request and passes them along to the glance_client, which can now handle these parameters. Essentially all of the logic for the pagination is handled in glance; we just pass along the correct parameters and do some error checking
* merge from trunk, resolved conflicts
* Update the OSAPI images controller to use 'serverRef' for image create requests
* Changed the error raise to not be AdminRequired when admin is not, in fact, required
* merge with trey
* Change to a more generic error and update documentation
* Merged trunk
* merge trunk
* merge with trunk
* Fixed incorrect exception
* This branch removes nwfilter rules when instances are terminated to prevent resource leakage and serious eventual performance degradation. Without this patch, launching instances and restarting nova-compute eventually become very slow
* merge with trunk
* resolve conflicts with trunk
* Update migrate script version to 22
* trunk merge after 2b hit
* Distributed Scheduler developer docs
* merged trunk again
* paramiko is not installed into the venv, but is required by smoketests/base.py. Added paramiko to tools/pip-requires
* Changes all uses of utcnow to use the version in utils. This is a simple wrapper for datetime.datetime.utcnow that allows us to use fake values for tests
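A minimal Python sketch of the testable utcnow wrapper described in the entry above (the override mechanism shown here is an assumption; the historical implementation may differ):

    import datetime

    def utcnow():
        """Return the current UTC time, unless a test has set an override."""
        if utcnow.override_time:
            return utcnow.override_time
        return datetime.datetime.utcnow()

    utcnow.override_time = None  # tests may assign a fixed datetime here

In tests, assigning a fixed datetime to utcnow.override_time makes time-dependent code deterministic without monkey-patching the standard library.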
* Set pylint to ignore correct lines that it could not determine were correct, due to the means by which eventlet.green imported subprocess. Minimized the number of these lines to ignore
* LDAP optimization and a fix for one small bug that caused a huge performance leak. Dashboard's benchmarks showed an overall 22x boost in page request completion time
* Adds LeastCostScheduler which uses a series of cost functions and associated weights to determine which host to provision to
* trunk merge
* Merged with trunk
* This change set adds the ability to create new servers with an href that points to a server image on any glance server (not only the default one configured). This means you can create a server with imageRef = http://glance1:9292/images/3 and then also create one with imageRef = http://glance2:9292/images/1. Using the old way of passing in an image_id still works as well, and will use the default configured glance server (imageRef = 3 for instance)
* merged trunk
* merge trunk... yay..
* make all uses of utcnow use our testable utils.utcnow
* Fixing conflicts
* This adds the ability to publish nova errors to an error queue
* Sudo chown the vbd device to the nova user before streaming data to it. This resolves an issue where nova-compute required 'root' privs to successfully create nodes with connection_type=xenapi
* Bugfix #780784. KeyError when creating custom image
* merge with trey
* merged from trunk
* small fixes
* fix pep8 issue from merge
* - move osapi-specific wsgi code from nova/wsgi.py to nova/api/openstack/wsgi.py - refactor wsgi modules to use a more object-oriented approach to wsgi request handling: - Resource object steps up to original Controller position - Resource coordinates deserialization, dispatch to controller, serialization - serialization and deserialization broken down to be more testable/flexible
* merge from trunk
* Merged from trunk
* Adds hooks for applying ovs flows when vifs are created and destroyed for XenServer instances
* Fixing a bunch of conflicts
* Incremented version of migration script to reflect changes in trunk
* Basic hook-up to HostFilter and fixed up the passing of InstanceType spec to the scheduler
* Resolving conflict and finish test_images
* merge
* Merged trunk
* Merged trunk and fixed conflicts
* added pause/suspend implementation to nova.virt.libvirt_conn
* Update the rebuild_instance function in the compute manager so that it accepts the arguments that our current compute API sends
* Added the filtering of image queries with image metadata. This is exposing the filtering functionality recently added to Glance. Attempting to filter using the local image service will be ignored
* This enables us to create a new volume from a snapshot with the EC2 api
* Use a new instance_metadata_delete_all DB api call to delete existing metadata when updating a server
* Add vnc_keymap flag, enable setting keymap for vnc console and fix bug #782611
* Rebased to trunk rev 1120
* trunk merge
* Cleaned up text conflict
* pep8 fixes
* merge trunk
* merge from trunk
* This adds volume snapshot support to the EC2 api
* Updates so that 'name' can be updated when doing an OS API v1.1 rebuild. Fixed issue where metadata wasn't getting deleted when an empty dict was POST'd on a rebuild
* Use metadata variable when calling _metadata_refs
* Fixes to the SQLAlchemy API such that metadata is saved on an instance_update.
  Added integration test to verify that instance metadata is updated on a rebuild
* Fixing pep8 problems
* Modified instance_type_create to take metadata
* Added test for instance type metadata update
* Adding accessor methods for instance type metadata
* trunk merge
* Fix the description of 'snapshot_name_template'
* compute: implement ec2 stop/start instances
* db: add a table for block device mapping
* Adds the ability to make a call that returns multiple times (a call returning a generator). This is also based on the work in rpc-improvements plus a bunch of fixes Vish and I worked through to get all the tests to pass, so the code is a bit all over the place
* Rename instances.image_id to instances.image_ref
* merge with dietz
* Virt tests passing while assuming the old style single nics
* merge trunk
* Essentially adds support for wiring up a swap disk when building
* Merged trunk
* branch 2a merge (including trunk)
* trunk merge
* merging trunk
* merge with dietz
* remove dead/duplicate code
* Added test skipper class
* cleanup the code for merging
* lots of fixes for rpc and extra imports
* almost everything working with fake_rabbit
* merge with dietz
* Fixing divergence
* Merged trunk
* Fixed the mistyped line referred to in bug 787023
* Merged trunk and resolved conflicts
* Merged with trunk
* Several changes designed to bring the openstack api 1.1 closer to spec - add ram limits to the nova compute quotas - enable injected file limits and injected file size limits to be overridden in the quota database table - expose quota limits as absolute limits in the openstack api 1.1 limits resource - add support for controlling 'unlimited' quotas to nova-manage
* During the API create call, the API would kick off a build and then loop in a greenthread waiting for the scheduler to pick a host for the instance. After the API saw that a host was picked, it would cast to the compute node's set_admin_password method
* Merged upstream
* merged trunk
* Merged trunk
* Created new libvirt directory, moved libvirt_conn.py to libvirt/connection.py, moved libvirt templates, broke out firewall and network utilities
* merge against 2a
* trunk merge
* merged recent trunk
* merged recent trunk
* eventlet.spawn_n() expects the function and arguments, but it expects the arguments unpacked since it uses *args
* merge with trey
* merge trunk
* moved auto assign floating ip functionality from compute manager to network manager
* need to return the ref
* many tests pass now
* Fixes some minor doc issues - misspelled flags in zones doc and also adds zones doc to an index for easier findability
* Synchronise with Diablo development
* zone1 merge
* uhhh yea
* make sure to get results, not the query
* merged from trunk
* Renaming service_image_id vars to image_id to reduce confusion. Also some minor cleanup
* port the current create_networks over to the new network scheme
* merge trunk
* merge branch lp:~rackspace-titan/nova/ram-limits
* Rebased to trunk rev 1101
* merge from trunk
* moved utils functions into nova/image/
* Trunk merge
* Fix bug #744150 by starting nova-api on an unused port
* Removing utils.is_int()
* merge trunk
* merging trunk
* Merged with trunk
* print information about nova-manage project problems
* merge from trunk
* This is the groundwork for the upcoming distributed scheduler changes. Nothing is actually wired up here, so it shouldn't break any existing code (and all tests pass)
* Merging trunk
* Get rid of old virt/images.py functions that are no longer needed.
  Checked for any loose calls to these functions and found none. All tests pass for me
* Update OSAPI v1.1 extensions so that they support RequestExtensions. ResponseExtensions were removed since the new RequestExtension covers both use cases. This branch also removes some of the odd serialization code in the RequestExtensionController that converted dictionary objects into webob objects. RequestExtension handlers should now always return proper webob objects
* foo
* Fixed some tests
* merge with trunk
* Added an EC2 API endpoint that allows import of a public key. Previously, the api only allowed generation of new keys
* Add new flag 'max_kernel_ramdisk_size' to specify a maximum size of kernel or ramdisk so we don't copy large files to dom0 and fill up /boot/guest
* Merged with trunk
* merge from trunk
* Merged trunk and resolved horrible horrible conflicts
* waldon's naming feedback
* Merging trunk
* updated the hypervisors and ec2 api to support receiving lists from pluralized mac_addresses and fixed_ips
* minor cleanup, plus had to merge because of diverged-branches issue
* merge from trunk
* Fix comments
* merge lp:nova
* default to port 80 if it isn't in the href/uri
* skeleton of forwarding calls to child zones
* merge trunk
* Implements a basic mechanism for pushing notifications out to interested parties. The rationale for implementing notifications this way is that the responsibility for them shouldn't fall to Nova. As such, we simply will be pushing messages to a queue where another worker entirely can be written to push messages around to subscribers
* get real absolute limits in openstack api and verify absolute limit responses
* Merging trunk
* fix pep8 issues
* fixed QuotaTestCases
* fixed ComputeTestCase tests
* made ImageControllerWithGlanceServiceTests pass
* get integrated server_tests passing
* Removed all utils.import_object(FLAGS.image_service) and replaced with utils.get_default_image_service()
* added is_int function to utils
* Pep8 fixes
* updates to utils methods, initial usage in images.py
* added util functions to get image service
* Adding fill-first cost function
* Fixes the naming of the server_management_url in auth and tests
* Merging in Sandy's changes adding Noop Cost Fn with tests
* merged trunk
* merge ram-limits
* Fixes improper attribute naming around instance types that broke Resizes
* Added missing metadata join to instance_get calls
* add ram limits to instance quotas
* Convert instance_type_ids in the instances table from strings to integers to enable joins with instance_types. This in particular fixes a problem when using postgresql
* merge lp:nova
* Re-pull changed notification branch
* failure conditions are being sent back properly now
* Added missing metadata join to instance_get calls
* Migrate quota schema from hardcoded columns to a key-value approach. The hope is that this change will make it easier to change the quota system without future schema changes. It also adds the concept of quotas that are unlimited
* updated the mac_address delete function to actually delete the rows, and update fixed_ips
* Added missing flavorRef and imageRef checks in the os api xml deserialization code along with tests
* This branch splits out the IPv6 address generation into pluggable backends.
  A new flag named ipv6_backend specifies which backend to use
* Review changes and merge from trunk
* merge trunk
* Adds proper error handling for images that can't be found and a test for deregister image
* added |fixed_ip_get_all_by_mac_address| and |mac_address_get_by_fixed_ip| to db and sqlalchemy APIs
* Merging in trunk
* I'm assuming that openstack doesn't work with python < 2.6 here (which I read somewhere on the wiki). This patch will check to make sure python >= 2.6 is installed, and also allow it to work with python 2.7 (and greater in the future)
* merge lp:nova
* XenAPI was not implemented to allow for multiple simultaneous XenAPI requests. A single XenAPIConnection (and thus XenAPISession) is used for all queries. XenAPISession's wait_for_task method would set self.loop for looping calls to _poll_task until task completion. Subsequent (parallel) calls to wait_for_task for another query would overwrite this. XenAPISession._poll_task was pulled into the XenAPISession.wait_for_task method to avoid having to store self.loop
* Merged trunk
* Merging in Sandy's changes
* merge trunk
* trunk merge
* merge trunk
* fixed_ip disassociate now also unsets mac_address_id
* Make sure imports are in alphabetical order
* merged from trunk
* if a LoopingCall has canceled the loop, break out early instead of sleeping any more than needed
* merged from trunk
* misc related network manager refactor and cleanup
* merged from trunk
* merge from trunk
* rename quota column to 'hard_limit' to make it simpler to avoid collisions with the sql keyword 'limit'
* 1. Set default paths for nova.conf and api-paste.ini to /etc/nova/ 2. Changed countryName policy because https://bugs.launchpad.net/nova/+bug/724317 still affects us
* Implement IPv6 address generation that includes an account identifier
* merge from trunk and update .mailmap file
* oops fixed a docstring
* more filter alignment
* merge trunk
* align filters on query
* Merged trunk
* Abstract out IPv6 address generation to pluggable backends
* Merged trunk
* extracted xenserver capability reporting from dabo's dist-scheduler branch and added tests
* Enable RightAWS style signature checking using server_string without the port number, add test cases for authenticate() and a new helper routine, and fix lp753660
* Set root password upon XenServer instance creation
* trunk merge
* fix mismerge by 1059
* Host Filtering for Distributed Scheduler (done before weighing)
* Rebased to trunk rev 1057
* merge from trunk
* convert quota table to key-value
* Simple fix for this issue. It tried to raise an exception passing in a variable that doesn't exist, which causes an error
* Merged trunk
* merge from trunk
* Sanitize get_console_output results.
  See bug #758054
* Merged trunk
* merge with trunk
* merge from trunk
* Merged with current trunk
* Merged trunk
* Adding OSAPI v1.1 limits resource
* Adding support for server rebuild to v1.0 and v1.1 of the Openstack API
* looking for default flagfile
* merging trunk
* merging trunk
* Merged trunk
* Merged trunk
* ensure create image conforms to OS API 1.1 spec
* merge updates from trunk
* merged from trunk
* merging trunk; resolving conflicts; fixing issue with ApiError test failing since r1043
* Implement get_host_ip_addr in the libvirt compute driver
* merging trunk; resolving conflicts
* merging trunk
* Final cleanup of nova/exceptions.py in my series of refactoring branches
* Uses memcached to cache roles so that ldap is actually usable
* Rebased to trunk rev 1035
* converted 1/0 comparison in db to True/False for Postgres cast compatibility
* converted 1/0 comparison to True/False for Postgres compatibility
* Added more unit tests for multi-nic-nova libvirt
* Make the import of distutils.extra non-mandatory in setup.py. Just print a warning that i18n commands are not available..
* further cleanup of nova/exceptions.py
* added eager loading of mac addresses for instance
* merge with trunk and resolve conflicts
* Refactoring usage of nova.exception.NotFound
* merging trunk
* Make the import of distutils.extra non-mandatory in setup.py. Just print a warning that i18n commands are not available..
* Refactoring the usage of nova.exception.Duplicate
* Rebased to trunk rev 1030
* merged from trunk
* pep8
* merging trunk
* Merged trunk and fixed simple exception conflict
* merging trunk
* Refactoring nova.exception.Invalid usage
* adding gettext to setup.py
* Use runtime XML instead of VM creation time XML for the createXML() call in order to ensure volumes are attached after RebootInstances as a workaround, and fix bug #747922
* Rebased to trunk rev 1027, and resolved a conflict in nova/virt/libvirt_conn.py
* Rebased to trunk rev 1027
* clarifies error when trying to add duplicate instance_type names or flavorids via nova-manage instance_type
* merge trunk
* Rework completed. Added test cases, changed helper method name, etc
* merge trunk, resolved conflict
* merge trunk
* Provide option of auto-assigning a floating ip to each instance. Depends on the auto_assign_floating_ip boolean flag value. False by default
* Restore volume state on migration failure to fix lp742256
* Fixes cloudpipe to get the proper ip address
* merging trunk
* Fix bug with content-type and small OpenStack API actions refactor
* merge with trunk
* merge trunk
* merged trunk
* Merged trunk and fixed api servers conflict
* Addressing exception.NotFound across the project
* eager loaded mac_address attributes for mac address get functions
* Fixed network_info creation in libvirt driver. Now creating the same dict as in the xenapi driver
* rebase trunk
* commit to push for testing
* Rebased to trunk rev 1015
* Utility method reworked, etc
* Docstring cleanup and formatting (nova/db dir). Minor style fixes as well
* Docstring cleanup and formatting (nova dir).
Minor style fixes as well * merge trunk * cleanups per code review * docstring cleanup, nova dir * docstring cleanup, nova/db dir * merge with trunk * Rebased to trunk rev 1005 * Merged trunk * trunk merged * Round 1 of pylint cleanup * Implement quotas for the new v1.1 server metadata controller * Fixes cloudpipe to get the proper ip address * Merged trunk * Add support for creating a snapshot of a nova volume with euca-create-snapshot * Add support for creating a snapshot of a nova volume with euca-create-snapshot * trunk merged * use 'is not None' instead of '!= None' * Support admin password when specified in server create requests * merge lp:nova and resolve conflicts * use 'is not None' instead of '!= None' * trunk merged * not performing floating ip operation with auto allocated ips * Rebased to trunk rev 995 * Rebased to trunk rev 995 * merge trunk * trunk merged. conflict resolved * Add additional logging for WSGI and OpenStack API authentication * Merged trunk * merging trunk * Updated following to Rick's comments * Blushed up a little bit * Merged lp:~rackspace-titan/nova/server\_metadata\_quotas as a prereq * Merged trunk * migration and pep8 fixes * Merged trunk * merge trunk * network manager changes, compute changes, various other * Floating ips auto assignment * Rebase to trunk rev 937 * merge trunk * Rebased to trunk rev 973 * merge trunk * resolved lazy\_match conflict between bin/nova-manage instance and instance\_type by moving instance subcommand under vm command. documented vm command in man page. removed unused instance\_id from vm list subcommand * Rebased to trunk rev 971 * Rebased to trunk rev 971 * There is a race condition when a VDI is mounted and the device node is created. Sometimes (depending on the configuration of the Linux distribution) nova loses the race and will try to open the block device before it has been created in /dev * merge trunk * removes log command from nova-manage as it no longer worked in multi-log setup * corrects incorrect openstack api responses for metadata (numeric/string conversion issue) and image format status (not uppercase) * Implement a mechanism to enforce a configurable quota limit for image metadata (properties) within the OS API image metadata controller * merge trunk * merge trunk * Fixes issues with describe instances due to improperly set metadata * Added support for listing addresses of a server in the openstack api. Now you can GET \* /servers/1/ips \* /servers/1/ips/public \* /servers/1/ips/private Supports v1.0 json and xml. Added corresponding tests * This fixes how the metadata and addresses collections are serialized in xml responses * merged trunk * merged trunk and resolved conflict * Update instances table to use instance\_type\_id instead of the old instance\_type column which represented the name (ex: m1.small) of an instance type * Remove and from AllocateAddress response, and fix bug #751176 * Blush up a bit * Rebased to trunk rev 949 * Rebased to trunk rev 949 * pep8 cleanup * merged trunk * Merged trunk * Support providing an XML namespace on the XML output from the OpenStack API * Merged with trunk, fixed up test that wasn't checking namespace * Enable RightAWS style signing on server\_string without port number portion * Improved unit tests Fixed docstring formatting * Only create ca\_path directory if it does not already exist * Make "setup.py install" much more thorough.
It now installs tools/ into /usr/share/nova and makes sure api-paste.conf lands in /etc/nova rather than /etc * merged trunk * merged trunk * Moved 'name' from to , corrected and fixes bug # 750482 * Separate CA/ dir into code and state * Add a find\_data\_files method to setup.py. Use it to get tools/ installed under /usr/(local/)/share/nova * Allow CA code and state to be separated, and make sure CA code gets installed by setup.py install * Rebased to trunk 942 * merge trunk * Refactor so that instances.instance\_type is now instances.instance\_type\_id * merging trunk * Declares the flag for vncproxy\_topic in compute.api * fixes incorrect case of OpenStack API status response * merge trunk * Added synchronize\_session parameter to a query in fixed\_ip\_disassociate\_all\_by\_timeout() and fix #735974 * Added updated\_at field to update statement according to Jay's comment * Rebased to trunk 930 * merge trunk * Add a change password action to /servers in openstack api v1.1, and associated tests * merge lp:nova * Rebased to trunk rev 925 * Merged with trunk (after faults change to return correct content-type) * OpenStack API faults have been changed to now return the appropriated Content-Type header * Implement quotas for the new v1.1 server metadata controller. Modified the compute API so that metadata is a dict (not an array) to ensure we are using unique key values for metadata. This is isn't explicit in the SPECs but it is implied by the new v1.1 spec since PUT requests modify individual items * Merged with trunk * Merged with trunk * Added synchronize\_session parameter to a query in fixed\_ip\_disassociate\_all\_by\_timeout() and fix #735974 * Merged trunk * merge trunk * merged trunk * The VNC Proxy is an OpenStack component that allows users of Nova to access their instances through a websocket enabled browser (like Google Chrome) * Support for volumes in the OpenStack API * Merged with trunk * add nova-vncproxy to setup.py * This branch adds support for linux containers (LXC) to nova. It uses the libvirt LXC driver to start and stop the instance * Glance used to return None when a date field wasn't set, now it returns ''. Glance used to return dates in format "%Y-%m-%dT%H:%M:%S", now it returns "%Y-%m-%dT%H:%M:%S.%f" * Adds support for versioned requests on /images through the OpenStack API * Merged trunk * Added VLAN networking support for XenAPI * Merged with trunk * Merged trunk * merge trunk * merged from trunk * merge lp:nova * merge trunk * merge trunk * Merged trunk * merge with trunk * merge lp:nova * Mixins for tests confuse pylint no end, and aren't necessary... 
you can stop the base-class from being run as a test by prefixing the class name with an underscore * Merged with trunk * merge trunk * merge trunk, fixed conflicts * merge trunk addressing Trey's comments * Merged with trunk, resolved conflicts & code-flicts * merged trunk * merge trunk * merge lp:nova * Adding links container to openstack api v1.1 servers entities * Merged trunk * Merged trunk * merging trunk * merge trunk * Merged trunk and fixed broken/conflicted tests * - add a "links" container to versions entities for Openstack API v1.1 - add testing for the openstack api versions resource and create a view builder * merging trunk * This is basic network injection for XenServer, and includes: * merging trunk * Implement image metadata controller for the v1.1 OS API * merging trunk * merging trunk, resolving conflicts * Add a "links" container to flavors entities for Openstack API v1.1 * merge trunk * merge trunk * merging trunk and resolving conflicts * Implement metadata resource for Openstack API v1.1. Includes: -GET /servers/id/meta -POST /servers/id/meta -GET /servers/id/meta/key -PUT /servers/id/meta/key -DELETE /servers/id/meta/key * merge trunk, add unit test * merge trunk * merge trunk addressing reviewer's comments * Support for markers for pagination as defined in the 1.1 spec * merge trunk * Ports the Tornado version of an S3 server to eventlet and wsgi, first step in deprecating the twistd-based objectstore * Merged with trunk Updated net injection for xenapi reflecting recent changes for libvirt * Support for markers for pagination as defined in the 1.1 spec * port the objectstore tests to the new tests * update test base class to monkey patch wsgi * merge trunk * Implementation of blueprint hypervisor-vmware-vsphere-support. (Link to blueprint: https://blueprints.launchpad.net/nova/+spec/hypervisor-vmware-vsphere-support) * Adds serverId to OpenStack API image detail per related\_image blueprint * Implement API extensions for the Openstack API. Based on the Openstack 1.1 API the following types of extensions are supported: * Merging trunk * Adds unit test coverage for XenAPI Rescue & Unrescue * libvirt driver multi\_nic support. In this phase libvirt can work with and without multi\_nic support, as in multi\_nic support for xenapi: https://code.launchpad.net/~tr3buchet/nova/xs\_multi\_nic/+merge/53458 * Merging trunk * Merged trunk * style and spacing fixed * Merged with trunk, fix problem with behaviour of (fake) virt driver when instance doesn't reach scheduling * In this branch we are forwarding incoming requests to child zones when the requested resource is not found in the current zone * trunk merge * Fixes a bug that was causing tests to fail on OS X by ensuring that greenthread sleep is called during retry loops * Merged trunk * Fix some errors that pylint found in nova/api/openstack/servers.py * Merged trunk * Pylint 'Undefined variable' E0602 error fixes * Made service\_get\_all()'s disabled parameter default to None. Pass False for enabled services; True for disabled services. Calls to this method have been updated to remain consistent * Merged with trunk * Merged trunk and resolved conflict in nova/db/sqlalchemy/api.py * change names for consistency with existing db api * Merged with trunk * Aggregates capabilities from Compute, Network, Volume to the ZoneManager in Scheduler * merged trunk r864 * merging trunk r864 * trunk merged. 
conflicts resolved * Merged trunk * merge trunk * Small refactor * merging trunk r863 * Merged trunk * trunk merge * merge trunk * merge trunk * Pass a fake timing source to live\_migration\_pre in every test that expectes it to fail, shaving off a whole minute of test run time * merge trunk * Poll instance states periodically, so that we can detect when something changes 'behind the scenes' * Merged with conflict and resolved conflict (with my own patch, no less) * Merged with trunk * Added a mechanism for versioned controllers for openstack api versions 1.0/1.1. Create servers in the 1.1 api now supports imageRef/flavorRef instead of imageId/flavorId * Merged trunk * Merged trunk * Fix issues with certificate updating & whitespace removal * Offers the ability to run a periodic\_task that sweeps through rescued instances older than 24 hours and forcibly unrescues them * Merged trunk * merge trunk * Merged with lp:nova, fixed conflicts * Move all types of locking into utils.synchronize decorator * Better method name * small fix * Added docstring * Updates the previously merged xs\_migration functionality to allow upsizing of the RAM and disk quotas for a XenServer instance * Fix lp735636 by standardizing the format of image timestamp properties as datetime objects * migration gateway\_v6 to network\_info * fix utils.execute retries for osx * Merged trunk * Automatically unrescue instances after a given timeout * trunk merge * Unit test cleanup * trunk merged * Merged trunk * Merged trunk * id -> instance\_id * merged with trunk Updated xenapi network injection for IPv6 Updated unit tests * merge trunk * merge trunk * Merging trunk * Merged with lp:nova * Merged with lp:nova * Filtering images by user\_id now * Added space in between # and TODO in #TODO * Enable flat manager support for ipv6 * Adding a talk bubble to the nova.openstack.org site that points readers to the 2011.1 site and the docs.openstack.org site - similar to the swift.openstack.org site. I believe it helps people see more sites are available, plus they can get to the Bexar site if they want to. Going forward it'll be nice to use this talk bubble to point people to the trunk site from released sites * Test the login behavior of the OpenStack API. Uncovered bug732866 * trunk merge * Renamed check\_instance -> check\_isinstance to make intent clearer * Fix some crypto strangeness (\n in file\_name field of certificates, wrong IMPL method for certificate\_update) * pep8 and fixed up zone-list * Pep8 fix * Merging trunk * Adding BASE\_IMAGE\_ATTRS to ImageService * Changed default for disabled on service\_get\_all to None. Changed calls to service\_get\_all so that the results should still be as they previously were * Resolved conflicts * Remove unused global semaphore * Addressed reviewer's comments * Merged trunk * When updating or creating set 'delete = 0'. (thus reactivating a deleted row) Filter by 'deleted' on delete * merging trunk r843 * merging trunk r843 * merging trunk r843 * Make synchronized decorator not leak semaphores, at the expense of not being truly thread safe (but safe enough for Eventlet style green threads) * merge trunk * Make synchronized support both external (file based) locks as well as internal (semaphore based) locks. Attempt to make it native thread safe at the expense of never cleaning up semaphores * merge with trunk * xenapi support for multi\_nic. This is a phase of multi\_nic which allows xenapi to work as is and with multi\_nic. 
The other virt driver(s) need to be updated with the same support * merge lp:nova * wrap and log errors getting image ids from local image store * merge lp:nova * merging trunk * Provide more useful exception messages when unable to load the virtual driver * Openstack api 1.0 flavors resource now implemented to match the spec * merging trunk r837 * zones3 and trunk merge * trunk merge * merge with trunk * merge trunk * merge trunk * merge trunk * fixes nova-manage instance\_type compatibility with postgres db * Make smoketests' exit code reveal whether they were succesful * merge trunk * fix nova-manage instance\_type list for postgres compatibility * Merged trunk * merge lp:nova * merge trunk * uses True/False instead of 1/0 for Postgres compatibility * Cleanup of FakeAuthManager * Replaced all pylint "disable-msg=" with "disable=" and "enable-msg=" with "enable=" * Re-implementation (or just implementation in many cases) of Limits in the OpenStack API. Limits is now available through /limits and the concept of a limit has been extended to include arbitrary regex / http verb combinations along with correct XML/JSON serialization. Tests included * merge with trunk * Mark instance metadata as deleted when we delete the instance * Fixed 'Undefined variable' errors generated by pylint (E0602) * Merged trunk * disable-msg -> disable * merge trunk * merge trunk * merge trunk * Implement metadata resource for Openstack API v1.1. Includes: -GET /servers/id/meta -POST /servers/id/meta -GET /servers/id/meta/key -PUT /servers/id/meta/key -DELETE /servers/id/meta/key * Merged trunk * Merged dependant branch lp:~rackspace-titan/nova/openstack-api-versioned-controllers * fixed up bzr mess * refactored out middleware, now it's a decorator on service.api * Fix a couple of things that assume that libvirt == kvm/qemu * Make utils.execute not overwrite std{in,out,err} args to Popen on retries. Make utils.execute reject unknown kwargs * merged trunk, merged qos, slight refactor regarding merges * - general approach for openstack api versioning - openstack api version now preserved in request context - added view builder classes to handle os api responses - added imageRef and flavorRef to os api v1.1 servers - modified addresses container structure in os api v1.1 servers * merge * Mark instance metadata as deleted when we delete the instance * Backfix of bugfix of issue blocking creating servers with metadata * Add support for network QoS (ratelimiting) for XenServer. Rate is pulled from the flavor (instance\_type) when constructing a vm * Improved exception handling * merging parent branch lp:~bcwaldon/nova/osapi-flavors-1\_1 * merging parent branch lp:~rackspace-titan/nova/openstack-api-version-split * merged trunk * merge trunk * Merged trunk * merge with trunk. moved scheduler\_manager into manager. 
fixed tests * pep8 * Remerge trunk * cleanup * moved scheduler API check into db.api decorator * MErge trunk * foo * hurr * hurr * Log the use of utils.synchronized * expanding osapi flavors tests; rewriting flavors resource with view builders; adding 1.1 specific links to flavors resources * Fix lp727225 by adding support for personality files to the openstack api * merge lp:nova and resolve conflicts * Merging trunk * Don't generate insecure passwords where it's easy to use urandom instead * merge trunk * merge trunk * added new class Instances for managaging instances added new method list in class Instances: * Merged with trunk (and brian's previous fixes to fake auth) * Add logging to lock check * Merged trunk * Use random.SystemRandom for easy secure randoms, configurable symbol set by default including mixed-case * merge lp:nova * Fixed bugs in bug fix (plugin call) * exception fixup * merged with trunk and removed conflicts * Merging trunk * Merged with trunk. Had to hold bazaar's hand as it got lost again * Fixed problem with metadata creation (backported fix) * Clarify the logic in using 32 symbols * Don't generate insecure passwords where it's easy to use urandom instead * Fixing API per spec, to get unit-tests to pass * merge trunk * Initial implementation of refresh instance states * Adding instance\_id as Glance image\_property * removed conflicts and merged with trunk * committing to share * NTT's live-migration branch, merged with trunk, conflicts resolved, and migrate file renamed * Test fixes and some typos * merge trunk * merge trunk * Make nova-dhcpbridge output lease information in dnsmasq's leasesfile format * Merged my doc changes with trunk * Make utils.execute not overwrite std{in,out,err} args to Popen on retries. Make utils.execute reject unknown kwargs * merge trunk * Merged with trunk * merged with latest trunk and removed unwanted files * Use a consistent naming scheme for XenAPI variables * fixed conflicts after merging with trunk with 787 * Replace raw SQL calls through session.execute() with SQLAlchemy code * Remove vish comment * Merged trunk * This change adds the ability to boot Windows and Linux instances in XenServer using different sets of vm-params * merge trunk * Changes the output of status in describe\_volumes from showing the user as the owner of the volume to showing the project as the owner * merge trunk * Adds in multi-tenant support to openstack api. Allows for multiple accounts (projects) with admin api for creating accounts & users * remerge trunk (again). fix issues caused by changes to deserialization calls on controllers * Minor stylistic updates affecting indentation * merge from trunk.. * Discovered literal\_column(), which does exactly what I need * Merged trunk * merge trunk * merge lp:nova * merge trunk * Add a new IptablesManager that takes care of all uses of iptables * Last un-magiced session.execute() replaced with SQLAlchemy code.. * PEP8 * Partial revert of one conversion due to phantom magic exception from SQLAlchemy in unrelated code; convert all deletes * Correct a misspelling * merge lp:nova * merge trunk * Introduces the ZoneManager to the Scheduler which polls the child zones and caches their availability and capabilities * merge trunk * merge lp:nova and add stub image service to quota tests as needed * merged to trunk rev781 * Modifies S3ImageService to wrap LocalImageService or GlanceImageService. It now pulls the parts out of s3, decrypts them locally, and sends them to the underlying service. 
It includes various fixes for image/glance.py, image/local.py and the tests * merged trunk * fixed based on reviewer's comment * Merged trunk * Replace session.execute() calls performing raw UPDATE statements with SQLAlchemy code, with the exception of fixed\_ip\_disassociate\_all\_by\_timeout() * merge lp:nova * merge, resolve conflicts, and update to reflect new standard deserialization function signature * Fixes doc build after execvp patch * - Content-Type and Accept headers handled properly - Content-Type added to responses - Query extensions no long cause computeFaults - adding wsgi.Request object - removing request-specific code from wsgi.Serializer * Fixes bug 726359. Passes unit tests * merge lp:nova, fix conflicts, fix tests * merge lp:nova and resolve conflicts * Hi guys * Update the create server call in the Openstack API so that it generates an 'adminPass' and calls set\_admin\_password in the compute API. This gets us closer to parity with the Cloud Servers v1.0 spec * Merged trunk * execvp passes pep8 * merge trunk * Add a decorator that lets you synchronise actions across multiple binaries. Like, say, ensuring that only one worker manipulates iptables at a time * merge lp:nova * Fixes bug #729400. Invalid values for offset and limit params in http requests now return a 400 response with a useful message in the body. Also added and updated tests * Fixes uses of process\_input * merged trunk r771 * Fixed pep8 issues * remerge trunk * merge lp:nova and resolve conflicts * merge trunk * Merged with trunk Updated exception handling according to spawn refactoring * execvp: unit tests pass * merged to trunk rev 769 * execvp: almost passes tests * Refactoring nova-api to be a service, so that we can reuse it in unit tests * merge trunk * Fixes lp730960 - mangled instance creation in virt drivers due to improper merge conflict resolution * Use disk\_format and container\_format in place of image type * Merging trunk * Fix the bug where fakerabbit is doing a sort of prefix matching on the AMQP routing key * merge trunk * merged trunk * Remerged trunk. fixed conflict * Added ability to remove networks on nova-manage command * This fix is an updated version of Todd's lp720157. Adds SignatureVersion checking for Amazon EC2 API requests, and resolves bug #720157 * execvp * Merged trunk * deleted network\_is\_associated from nova.db api * added network\_get\_by\_cidr method to nova.db api * Log failed command execution if there are more retry attempts left * Implementation for XenServer migrations. There are several places for optimization but I based the current implementation on the chance scheduler just to be safe. Beyond that, a few features are missing, such as ensuring the IP address is transferred along with the migrated instance. This will be added in a subsequent patch. Finally, everything is implemented through the Openstack API resize hooks, but actual resizing of the instance RAM and hard drive space is not yet implemented * Merged with current trunk * Resolving excess conflicts due to criss-cross in branch history * Rebased to nova revision 761 * \* Updated readme file with installation of suds-0.4 through easy\_install. 
\* Removed pass functions \* Fixed pep8 errors \* Few bug fixes and other commits * merged trunk * merge trunk * remove ensure\_b64\_encoding * Merged to trunk rev 759 * Merged trunk rev 758 * merge lp:nova * Merged with Trunk * This fix changes a tag contained in the DescribeKeyPairs response from to so that Amazon EC2 access libraries which does more strict syntax checking can work with Nova * some comments are modified * Merged to trunk rev 757. Main changes are below. 1. Rename db table ComputeService -> ComputeNode 2. nova-manage option instance\_type is reserved and we cannot use option instance, so change instance -> vm * Remerged trunk, fixed a few conflicts * Add in multi-tenant support in openstack api * merged to trunk rev757 * Merged to rev 757 * merges dynamic instance types blueprint (http://wiki.openstack.org/ConfigureInstanceTypesDynamically) and bundles blueprint (https://blueprints.launchpad.net/nova/+spec/flavors) * merged trunk * Very simple change checking for < 0 values in "limit" and "offset" GET parameters. If either are negative, raise a HTTPBadRequest exception. Relevant tests included * Fixes Bug #715424: nova-manage : create network crashes when subnet range provided is not enough , if the network range cannot fit the parameters passed, a ValueError is raised * changed \_context * Provide the ability to rescue and unrescue a XenServer instance * merged trunk * Changed ra\_server to gateway\_v6 and removed addressv6 column from fixed\_ips db table * merging trunk * Merged trunk * Fixed pep8 issues, applied jaypipes suggestion * Rebased to nova revision 752 * Use functools.wraps to make sure wrapped method's metadata (docstring and name) doesn't get mangled * merge from trunk * Merged trunk * merged to trunk rev 752 * Rebased at lp:nova 759 * 1. merged trunk rev749 2. rpc.call returns '/' as '\/', so nova.compute.manager.mktmpfile, nova.compute.manager.confirm.tmpfile, nova.scheduler.driver.Scheduler.mounted\_on\_same\_shared\_storage are modified followed by this changes. 3. nova.tests.test\_virt.py is modified so that other teams modification is easily detected since other team is using nova.db.sqlalchemy.models.ComputeService * This branch implements the openstack-api-hostid blueprint: "Openstack API support for hostId" * replaced ugly INSTANCE\_TYPE constant with (slightly less ugly) stubs * Add a lock\_path flag for lock files * refactored nova-manage list (-all, ) and fixed docs * merge trunk * Adds VHD build support for XenServer driver * Merging trunk to my branch. Fixed a conflict in servers.py * Merging trunk * 1) merge trunk 2) removed preconfigure\_xenstore 3) added jkey for broadcast address in inject\_network\_info 4) added 2 flags: 4.1) xenapi\_inject\_image (default True) This flag allows for turning off data injection by mounting the image in the VDI (agreed with Trey Morris) 4.2) xenapi\_agent\_path (default /usr/bin/xe-update-networking) This flag specifies the path where the agent should be located. It makes sense only if the above flag is True. 
If the agent is found, data injection is not performed * merge trunk * Add utils.synchronized decorator to allow for synchronising method entrance across multiple workers on the same host * execute: shell=True removed * Rebased to Nova revision 749 * fixed FIXME * merge with zones2 fixes and trunk * trunk merge * trunk merge, pip-requires and novatools to novaclient changes * Fixes FlatDHCP by making it inherit from NetworkManager and moving some methods around * merged trunk * Add tests for 718999, fix a little brittle code introduced by the committed fix * Copy over to current trunk my tests, the 401/500 fix, and a couple of fixes to the committed fix which was actually brittle around the edges.. * I'm working on consolidating install instructions specifically (they're the most asked-about right now) and pointing to the docs.openstack.org site for admin docs * Merged trunk * Merging trunk, conflicts fixed * Rebased at lp:nova 740 * merged with trunk * Cleanup db method names for dealing with auth\_tokens to follow standard naming pattern * Pass id of token to be deleted to the db api, not the actual object * Rename auth\_token db methods to follow standard * Merging trunk, small fixes * IPV6 FlatManager changes * Make tests start with a clean database for every test * merge trunk * merge trunk * previous trunk merge * merge clean db * merged trunk * merge trunk * Merged trunk * Support HP/LeftHand SANs. We control the SAN by SSHing and issuing CLIQ commands. Also improved the way iSCSI volumes are mounted: try to store the iSCSI connection info in the volume entity, in preference to doing discovery. Also CHAP authentication support * merge trunk * Merged with trunk * Adds colors to output of tests and cleans up run\_tests.py * Reverted bad-fix to sqlalchemy code * Merged with trunk * merged upstream * merged trunk * Helper function that supports XPath style selectors to traverse an object tree e.g * Rename minixpath\_select to get\_from\_path * Fixes the describe\_availability\_zones to use an elevated context when getting services and the db calls to pass parameters correctly so is\_admin check works * fix describe\_availability\_zones * Cope when we pass a non-list to xpath\_select - wrap it in a list * Fixes existing smoketests and splits out sysadmin tests from netadmin tests * Created mini XPath implementation, to simplify mapping logic * move the deletion of the db into fixtures * merged upstream * Fixes and optimizes filtering for describe\_security\_groups. Also adds a unit test * merged trunk * merged trunk * use flags for sqlite db names and fix flags in dhcpbridge * merged trunk * The proposed branch prevents FlatManager from executing network initialisation tasks contained in linux\_net.init\_host(), which are unnecessary when flat networking is used * merged trunk * merge trunk * Initial support for per-instance metadata, though the OpenStack API. Key/value pairs can be specified at instance creation time and are returned in the details view. Support limits based on quota system * Merged trunk * Fixes lots of errors in the unit tests * Merged trunk * speed up network tests * merged trunk * move db creation into fixtures and clean db for each test * Lots of test fixing * Don't blindly concatenate queue name if second portiion is None * merged trunk * Merged with trunk, including manual conflict resolution in nova/virt/disk.py and nova/virt/xenapi/vmops.py * Fix DescribeRegion answer by introducing '{ec2,osapi}\_listen' flags instead of overloading {ec2,osapi}\_host. 
Get rid of paste\_config\_to\_flags, bin/nova-combined. Adds debug FLAGS dump at start of nova-api * Also remove nova-combined from setup.py * Merged trunk * no, really fix lp721297 this time * Fixed based on reviewer's comment. 1. Change docstrings format 2. Fix comment grammer mistake, etc * Fixes various issues regarding verbose logging and logging errors on import * merged trunk * Some quick test cleanups, first step towards standardizing the way we start services in tests * merged to trunk rev709. NEEDS to be fixed based on 3rd reviewer's comment * Fixed based on reviewer's comment. 1. DB schema change vcpu/memory/hdd info were stored into Service table. but reviewer pointed out to me creating new table is better since Service table has too much columns * update based on prereq branch * fixed newline and moved import fake\_flags into run\_tests where it makes more sense * Merged with head * remove keyword argument, per review * add a start\_service method to our test baseclass * Merged with trunk * switch to explicit call to logging.setup() * merged trunk * Adds translation catalogs and distutils.extra glue code that automates the process of compiling message catalogs into .mo files * Removing duplicate installation docs and adding flag file information, plus pointing to docs.openstack.org for Admin-audience docs * PEP8 errors and remove check in authors file for nova-core, since nova-core owns the translation export branch * Merged trunk * PEP-8 fixes * merged with nova trunk revision #706 * get rid of initialized flag * move the fake initialized into fake flags * fixes for various logging errors and issues * Pep8 cleanup * Introduce IptablesManager in linux\_net. Port every use of iptables in linux\_net to it * Merging trunk to my branch. Fixed conflicts in Authors file and .mailmap * Merging trunk * added functionality to list only fixed ip addresses of one node and added exception handling to list method * fixed based on reviewer's comment. 1. erase wrapper function(remove/exists/mktempfile) from nova.utils. 2. nova-manage service describeresource(->describe\_resource) 3. nova-manage service updateresource(->update\_resource) 4. erase "my mistake print" statement * merged trunk * Correctly pass the associate paramater for project\_get\_network through the IMPL layer in the db api * Merged with trunk * Initial support for per-instance metadata, though the OpenStack API. Key/value pairs can be specified at instance creation time and are returned in the details view. Support limits based on quota system * Added support for feature parity with the current Rackspace Cloud Servers practice of "injecting" files into newly-created instances for configuration, etc. However, this is in no way restricted to only writing files to the guest when it is first created * Correctly pass the associate paramater to project\_get\_network * Uncommitted changes using the wrong author, and re-committing under the correct author * Added http://mynova/v1.0/zones/ api options for add/remove/update/delete zones. child\_zones table added to database and migration. Changed novarc vars from CLOUD\_SERVERS\_\* to NOVA\_\* to work with novatools. 
See python-novatools on github for help testing this * merge with zone phase 1 * merged lp:~jk0/nova/dynamicinstancetypes * changed from 003-004 migration * Merged trunk * Hi guys * Rebased at lp:nova 688 * Update the Openstack API so that it returns 'addresses' * I have a bug fix, additional tests for the \`limiter\` method, and additional commenting for a couple classes in the OpenStack API. Basically I've just tried to jump in somewhere to get my feet wet. Constructive criticism welcome * added labels to networks for use in multi-nic added writing network data to xenstore param-list added call to agent to reset network added reset\_network call to openstack api * Add a command to nova-manage to list fixed ip's * comments + Englilish, changed copyright in migration, removed network\_get\_all from db.api (vestigial) * example: * Merged trunk * added new functionality to list all defined fixed ips * Merged trunk and fixed conflict with other Brian in Authors * Rebased at lp:nova 687 * added i18n of 'No networks defined' * fixed * Merging trunk * Better exceptions * -from migrate.versioning import exceptions as versioning\_exceptions + +try: + from migrate.versioning import exceptions as versioning\_exceptions +except ImportError: + try: + # python-migration changed location of exceptions after 1.6.3 + # See LP Bug #717467 + from migrate import exceptions as versioning\_exceptions + except ImportError: + sys.exit(\_("python-migrate is not installed. Exiting.")) * Merged to trunk * Use RotatingFileHandler instead of FileHandler * Use a threadpool for handling requests coming in through RPC * Typos * Derp * fixed authors, import sys in migration.py * Merged trunk * added functionality to nova-manage to list created networks * I fail at sessions * I fail at sessions * Foo * Merging trunk part 1 * merge with trunk * merging trunk back in; updating Authors conflict * Merged lp:nova * Fixes tarball contents by adding missing scripts and files to setup.py / MANIFEST.in * The proposed fix puts a VM which fails to spawn in a (new) 'FAILED' power state. It does not perform a clean-up. This because the user needs to know what has happened to the VM he/she was trying to run. Normally, API users do not have access to log files. In this case, the only way for the user to know what happened to the instance is to query its state (e.g.: doing euca-describe-instances). If we perform a complete clean-up, no information about the instance which failed to spawn will be left * Use eventlet.green.subprocess instead of standard subprocess * Adds Distutils.Extra support, removes Babel support, which is half-baked at best * Adding missing scripts and files to setup.py / MANIFEST.in * fixed nova-combined debug hack and renamed ChildZone to Zone * fixed merge conflict * better filtering * Added try clause to handle changed location of exceptions after 1.6.3 in python-migrate LP Bug #717467 * Use eventlet.green.subprocess instead of standard subprocess * merged recent version. 
no conflict, no big/important change to this branch * merge jk0 branch (with trunk merge) which added additional columns for instance\_types (which are openstack api specific) * corrected model for table lookup * Derp * merging with trunk * Merged trunk * Modified S3ImageService to return the format defined in BaseService to allow EC2 API's DescribeImages to work against Glance * Merged trunk * More typos * More typos * More typos * More typos * fixed exceptions import from python migrate * This fixes a lazy-load issue in describe-instances, which causes a crash. The solution is to specifically load the network table when retrieving an instance * added instance\_type\_purge() to actually remove records from db * updated tests and added more error checking * Merged trunk * joinedload network so describe\_instances continues to work * First, not all * Merged to trunk and fixed merge conflict in Authors * fixed destroy calls * Forgot the metadata includes * added get IPs by instance * forgot to add network\_get\_all\_by\_instance to db.api * template adjusted to NOVA\_TOOLS, zone db & os api layers added * trunk merge * 1. Merged to rev654(?) 2. Fixed bug continuous request. if user continuouslly send live-migration request to same host, concurrent request to iptables occurs, and iptables complains. This version add retry for this issue * Pass timestamps to the db layer in fixed\_ip\_disassociate\_all\_by\_timeout rather than converting to strings ahead of time, otherwise comparison between timestamps would often fail * Added support for 'SAN' style volumes. A SAN's big difference is that the iSCSI target won't normally run on the same host as the volume service * added support to pull list of ALL instance types even those that are marked deleted * Fix PEP8 violations * Don't convert datetime objects to a string using .isoformat(). Leave it to sqlalchmeny (or pysqlite or whatever it is that does the magic) to work it out * added testing for instance\_types.py and refactored nova-manage to use instance\_types.py instead of going directly to db * additional error checking for nova-manage instance\_type * Automates the setup for FlatDHCP regardless of whether the interface has an ip address * Changes and bug fixes * merge with lp:nova * merge source and remove ifconfig * Catching all socket errors in \_get\_my\_ip, since any socket error is likely enough to cause a failure in detection * added INSTANCE\_TYPES to test for compatibility with current tests * require user context for most flavor/instance\_type read calls * added network\_get\_all\_by\_instance(), call to reset\_network in vmops * simplified instance\_types db calls to return entire row - we may need these extra columns for some features and there seems to be little downside in including them. still need to fix testing calls * updated api.create to use instance\_type table * instance\_types should return in predicatable order (by name currently) * corrected db.instance\_types to return expect dict instead of lists. updated openstack flavors to expect dicts instead of lists. added deleted column to returned dict * converted openstack flavors over to use instance\_types table. a few pep changes * added FIXME(kpepple) comments for all constant usage of INSTANCE\_TYPES. 
updated api/ec2/admin.py to use the new instance\_types db table * Added a bunch of stubbed out functionality * Moved ssh\_execute to utils; moved comments to docstring * Fixes for Vish & Devin's feedback * Fixes https://bugs.launchpad.net/nova/+bug/681417 * merging * Fixed PEP8 test problems, complaining about too many blank lines at line 51 * flagged all INSTANCE\_TYPES usage with FIXME comment. Added basic usage to nova-manage (needs formatting). created api methods * Fixes bug #709057 * merge trunk * Merged trunk * Match the initial db version to the actual Austin release db schema * 1. Discard nova-manage host list Reason: nova-manage service list can be replacement. Changes: nova-manage * fix austin->bexar db migration * incorporate feedback from devin - use sql consistently in instance\_destroy also, set deleted\_at * merge trunk * Makes having sphinx to build docs a conditional thing - if you have it, you can get docs. If you don't, you can't * Fixed a pep8 spacing issue * fixes for bug #709057 * Working on api / manager / db support for zones * Adds security group output to describe\_instances * Use firewall\_driver flag as expected with NWFilterFirewall. This way, either you use NWFilterFirewall directly, or you use IptablesFirewall, which creates its own instance of NWFilterFirewall for the setup\_basic\_filtering command. This removes the requirement that LibvirtConnection would always need to know about NWFirewallFilter, and cleans up the area where the flag is used for loading the firewall class * Added a test that checks for localized strings in the source code that contain position-based string formatting placeholders. If found, an exception message is generated that summarizes the problem, as well as the location of the problematic code. This will prevent future trunk commits from adding localized strings that cannot be properly translated * Makes sure all instance and volume commands that raise not found are changed to show the ec2\_id instead of the internal id * Fixed formatting issues in current codebase * Fixes NotFound messages in api to show the ec2\_id * adding testcode * Fix Bug #703037. ra\_server is None * merge trunk * Changed method signature of create\_network * merged r621 * Merged with http://bazaar.launchpad.net/~vishvananda/nova/lp703037 * Merged trunk * Simple little changes related to openstack api to work better with glance * This branch updates docs to reflect the db sync addition. It additionally adds some useful errors to nova-manage to help people that are using old guides. It wraps sqlalchemy errors in generic DBError. Finally, it updates nova.sh to use current settings * Add a host argument to virt drivers's init\_host method. It will be set to the name of host it's running on * merged trunk * Wraps the NotFound exception at the api layer to print the proper instance id. Does the same for volume. Note that euca-describe-volumes doesn't pass in volume ids properly, so you will get no error messages on euca-describe-volumes with improper ids. We may also need to wrap a few other calls as well * Fixes issue with SNATTING chain not getting created or added to POSTROUTING when nova-network starts * Fix for bug #702237 * another trunk merge * This patch: * Trunk merged * Add a host argument to virt driver's init\_host method. 
It will be set to the name of host it's running on * Adds conditional around sphinx inclusion * merge with trunk * Fixes project and role checking when a user's naming attribute is not uid * Merged with r606 * Fixed merge conflict * Localized strings that employ formatting should not use positional arguments, as they prevent the translator from re-ordering the translated text; instead, they should use mappings (i.e., dicts). This change replaces all localized formatted strings that use more than one formatting placeholder with a mapping version * merged ntt branch * merged branch to name net\_manager.create\_networks args * Fix describe\_regions by changing renamed flags. Also added a test to catch future errors * Merged trunk * merged trunk * merged trunk fixed whitespace in rst * wrap sqlalchemy exceptions in a generic error * Wrap instance at api layer to print the proper error. Use same logic for volumes * Resolved trunk merge conflicts * Updated docs for db sync requirements; merged with Vish's similar doc updates * Change default log formats so that:  \* they include a timestamp (necessary to correlate logs)  \* no longer display version on every line (shorter lines)  \* use [-] instead of [N/A] (shorter lines, less scary-looking)  \* show level before logger name (better human-readability) * Merged with rev597 * syntax error * should be writing some kindof network info to the xenstore now, hopefully * Doc changes for db sync * Fixes issue with describe\_instances requiring an admin context * Passing in an elevated context instead of making the call non-elevated * Added changes to make errors and recovery for volumes more graceful: * Changing service\_get\_all\_by\_host to not require admin context as it is used for describing instances, which any user in a project can do * Eagerly load fixed\_ip.network in instance\_get\_by\_id * merge trunk * Implement provider-level firewall rules in nwfilter * Merged trunk * Refactor run\_tests.sh to allow us to run an extra command after the tests * Merged trunk * Eagerly load instance's fixed\_ip.network attribute * merged trunk changes * minor code cleanup * Refactor run\_tests.sh to allow us to run an extra command after the tests * merged trunk * merge vish's changes (which merged trunk and fixed a pep8 problem) * merged trunkand fixed conflicts and pep error * get\_my\_linklocal raises exception * Completed first pass at converting all localized strings with multiple format substitutions * Allows moving from the Austin-style db to the Bexar-style * move db sync into nosetests package-level fixtures so that the existing nosetests attempt in hudson will pass * merge from upstream and fix small issues * merged to trunk rev572 * Merged trunk * The live\_migration branch ( https://code.launchpad.net/~nttdata/nova/live-migration/+merge/44940 ) was not ready to be merged * merge from upstream to fix conflict * Trunk merge * Merged trunk * Implement support for streaming images from Glance when using the XenAPI virtualization backend, as per the bexar-xenapi-support-for-glance blueprint * Works around the app-armor problem of requiring disks with backing files to be named appropriately by changing the name of our extra disks * merged trunk * Add refresh\_security\_group\_\* methods to nova/virt/fake.py, as FakeConnection is the reference for documentation and method signatures that should be implemented by virt connection drivers * revert live\_migration branch * Merged trunk * Risk of Regression: This patch don’t modify existing functionlities, 
but I have added some. 1. nova.db.service.sqlalchemy.model.Serivce (adding a column to database) 2. nova.service ( nova-compute needes to insert information defined by 1 above) * Add rules to database, cast refresh message and trickle down to firewall driver * Fixed error message in get\_my\_linklocal * Merged trunk * Merged with trunk revno 572 * Change where paste.deploy factories live and how they are called. They are now in the nova.wsgi.Application/Middleware classes, and call the \_\_init\_\_ method of their class with kwargs of the local configuration of the paste file * Further decouple api routing decisions and move into paste.deploy configuration. This makes paste back the nova-api binary * Merged trunk * The Openstack API requires image metadata to be returned immediately after an image-create call * merge trunk * Merging trunk * Merged trunk * merged trunk rev569 * merged to rev 561 and fixed based on reviewer's comment * Adds a developer interface with direct access to the internal inter-service APIs and a command-line tool based on reflection to interact with them * merge from upstream * pep8 fixes... largely to things from trunk? * merge from upstream * This branch fixes two outstanding bugs in compute. It also fixes a bad method signature in network and removes an unused method in cloud * Re-removes TrialTestCase. It was accidentally added in by some merges and causing issues with running tests individually * merged trial fix again * undo accidental removal of fake\_flags * merged lp:~vishvananda/nova/lp703012 * remove TrialTestCase again and fix merge issues * Merged trunk * Merged with trunk revno 565 * Implements the blueprint for enabling the setting of the root/admin password on an instance * OpenStack Compute (Nova) IPv4/IPv6 dual stack support http://wiki.openstack.org/BexarIpv6supportReadme * Merged to rev.563 * This change introduces support for Sheepdog (distributed block storage system) which is proposed in https://blueprints.launchpad.net/nova/+spec/sheepdog-support * merge from upstream: * Merged with r562 * This modifies libvirt to use CoW images instead of raw images. This is much more efficient and allows us to use the snapshotting capabilities available for qcow2 images. It also changes local storage to be a separate drive instead of a separate partition * remove ">>>MERGE" iin nova/db/sqlalchemy/api.py * merged trunk * Merged with r561 * Merging Trunk * Fixed based on the comments from code review. Merged to trunk rev 561 * Add a new method to firewall drivers to tell them to stop filtering a particular instance. Call it when an instance has been destroyed * merged to trunk rev 561 * Merged trunk * merge trunk rev560 * Fixes related to how EC2 ids are displayed and dealt with * Get reviewed and fixed based on comments. Merged latest version * Merged trunk * Added unit tests for the Diffie-Hellman class. Merged recent trunk changes * merged trunk * Fixed missing \_(). Fixed to follow logging to LOG changes. Fixed merge miss (get\_fixed\_ip was moved away). Update some missing comments * merge from upstream and fix leaks in console tests * add support for database migration * merged with r555 * standardize on hex for ids, allow configurable instance names * Fix test failures on Python 2.7 by eagerly loading the fixed\_ip attribute on instances. 
No clue why it doesn't affect python 2.6, though * Do joinedload\_all('fixed\_ip.floating\_ips') instead of joinedload('fixed\_ip') * Merging trunk * Merging trunk, small fixes * cleaned up prior merge mess * Merged with r551 * Support IPv6 firewall with IptablesFirewallDriver * Fixed syntax errors * Merged with trunk * Added support for availability zones for compute. models.Service got an additional availability\_zone field and a ZoneScheduler was created that makes decisions based on this field. Also replaced fake 'nova' zone in EC2 cloud api * Eagerly load fixed\_ip property of instances * Had to abandon the other branch (~annegentle/nova/newscript) because the diffs weren't working right for me. This is a fresh branch that should be merged correctly with trunk. Thanks for your patience. :) * Merged with 549 * Change command to get link local address Remove superfluous code * This branch adds web based serial console access. Here is an overview of how it works (for libvirt): * Merged with r548 * Fixed for pep8 Remove temporary debugging * changed exception class * Changing DN creation to do searches for entries * merge trunk, fix conflict * resolve pylint warnings * Read Full Spec for implementation details and notes on how to boot an instance using OS API. http://etherpad.openstack.org/B2RK0q1CYj * Fixed a number of issues with the iptables firewall backend: \* Port specifications for firewalls come back from the data store as integers, but were compared as strings. \* --icmp-type was misspelled as --icmp\_type (underscore vs dash) \* There weren't any unit tests for these issues * merged trunk changes * Merging trunk * Trunk merge and conflicts resolved * Implementation of xs-console blueprint (adds support for console proxies like xvp) * Add support for EBS volumes to the live migration feature. Currently, only AoE is supported * Changed shared\_ip\_group detail routing * Fixes the metadata forwarding to work by default * Adds support to nova-manage to modify projects * merged trunk changes * merge trunk * Bugfix * Adds the requisite infrastructure for automating translation templates import/export to Launchpad * Added babel/gettext build support * re-merged in trunk to correct conflict * Fix describe\_availability\_zones verbose * merged changes from trunk * Add a new firewall backend for libvirt, based on iptables * Moved get\_my\_ip into flags because that is the only thing it is being used for and use it to set a new flag called my\_ip * fixes Document make configuration by updating nova version mechanism to conform to rev530 update * added myself to authors and fixed typo to follow standard * typo correction * fixed doc make process for new nova version (rev530) mechanism * merged from upstream and made applicable changes * Adds a mechanism to programmatically determine the version of Nova. The designated version is defined in nova/version.py.
When running python setup.py from a bzr checkout, information about the bzr branch is put into nova/vcsversion.py which is conditionally imported in nova/version.py * s/canonical\_version/canonical\_version\_string/g * merged trunk changes * Fixes issue in trunk with downloading s3 images for instance creation * Wrap logs so we can: \* use a "context" kwarg to track requests all the way through the system \* use a custom formatter so we get the data we want (configurable with flags) \* allow additional formatting for debug statements for easier debugging \* add an AUDIT level, useful for noticing changes to system components \* use named logs instead of the general logger where it makes sense * Bug #699912: When failing to connect to a data store, Nova doesn't log which data store it tried to connect to * Bug #699912: When failing to connect to a data store, Nova doesn't log which data store it tried to connect to * Resolved merge differences * Additional cleanup prior to pushing * Merged with trunk * Less code generation * merged changes from trunk * Remove redundant import of nova.context. Use db instance attribute rather than module directly * Merging trunk * Removing some FIXMEs * Reserving image before uploading * merge * another merge with trunk to remedy instance\_id issues * merge * Include date in API action query * Review feedback * This branch implements lock functionality. The lock is stored in the compute worker database. Decorators have been added to the openstack API actions which alter instances in any way * Review feedback * Review feedback * Review feedback * various cleanup and fixes * merged trunk * Include date in action query * Let documentation get version from nova/version.py as well * Track version info, and make available for logging * Merged trunk * pep8 fix * merged trunk changes * commit before merging trunk * Introduces basic support for spawning, rebooting and destroying vms when using Microsoft Hyper-V as the hypervisor. Images need to be in VHD format. Note that although Hyper-V doesn't accept kernel and ramdisk separate from the image, the nova objectstore api still expects an image to have an associated aki and ari. You can use dummy aki and ari images -- the hyper-v driver won't use them or try to download them. Requires Python's WMI module * merged trunk changes * fix some glitches due to someone removing instance.internal\_id (not that I mind) remove accidental change to nova-combined script * Fixed trunk merge conflicts as spotted by dubs * Fix a bunch of pep8 stuff * This addition to the docs clarifies that it is a requirement for contributors to be listed in the Authors file before their commits can be merged to trunk * merge trunk * another merge from trunk to the latest rev * pulled changes from trunk added console api to openstack api * This branch contains the internal API cleanup branches I had previously proposed, but combined together and with all the UUID key replacement ripped out. This allows multiple REST interfaces (or other tools) to use the internal API directly, rather than having the logic tied up in the ec2 cloud.py file * merged trunk changes * Created a XenAPI plugin that will allow nova code to read/write/delete from xenstore records for a given instance.
Added the basic methods for working with xenstore data to the vmops script, as well as plugin support to xenapi\_conn.py * add in xs-console worker and tests * missing \_() * Added xenstore plugin changed * merged changes from trunk * Change all 2010 Copyright statements to 2010-2011 in doc source directory only * merged from trunk * Removed leftover UUID reference * Merged trunk * Merged trunk changes * Some Bug Fix * Merged and fixed conflicts with r515 * Apply logging changes as a giant patch to work around the cloudpipe delete + add issue in the original patch * Fixes LP688545 * Fixing merge conflicts with new branch * merged in trunk changes * Fixes LP688545 * Uses paste.deploy to make application running configurable. This includes the ability to swap out middlewares, define new endpoints, and generally move away from having code to build wsgi routers and middleware chains into a configurable, extensible method for running wsgi servers * Add burnin support. Services are now by default disabled, but can have instances and volumes run on them using availability\_zone = nova:HOSTNAME. This lets the hardware be put through its paces without being put in the generally available pool of hardware. There is a 'service' subcommand for nova-manage where you can enable, disable, and list statuses of services * pep8 fixes * Several documentation corrections and formatting fixes * merge in trunk * merged latest trunk * merge trunk * merge trunk * merged in trunk and xenstore-plugin changes * Merged trunk * Merged trunk * Merged trunk * Merged trunk * Merged with the latest version. Changes are as follows: added my affiliation to Authors; generate\_uid in utils.py was broken and instance IDs were overflowing, so that handling was removed for now and will be re-tested later * Merged trunk * Make InstanceActions and live diagnostics available through the Admin API * merge trunk * merge trunk * Cleans up the output of run\_tests.sh to look closer to Trial * Merged trunk * This patch is the beginning of XenServer snapshots in nova. It adds: * merge recent revision (version of 2010/12/28) Change: 1. Use greenthread instead of defer at nova.virt.libvirt\_conn.live\_migration. 2. Move nova.scheduler.manager.live\_migration to nova.scheduler.driver 3. Move nova.scheduler.manager.has\_enough\_resource to nova.scheduler.driver 4. Any check routine in nova-manage.instance.live\_migration is moved to nova.scheduler.driver.schedule\_live\_migration * Merging trunk * removed db.set\_lock, using update\_instance instead * removed some code I didn't end up using * fixed merge conflict with trunk * PEP8 cleanup * Merged trunk * Added implementation of availability\_zones to EC2 API * merge * Changes and error fixes to help ensure basic parity with the Rackspace API. Some features are still missing, such as shared ip groups, and will be added in a later patch set * initial lock functionality commit * Merged with trunk * merge trunk * Default services to enabled * Add flag --enable\_new\_services to toggle default state of service when created * merge from trunk * This commit introduces scripts to apply XenServer host networking protections * merge from upstream and fix conflicts * Make action log available through Admin API * Merging trunk * Merged trunk * Added InstanceAction DB functions * merge trunk * I've added suspend along with a few changes to power state as well. I can't imagine suspend will be controversial but I've added a new power state for "suspended" to nova.compute.power\_states which libvirt doesn't use and updated the xenapi power mapping to use it for suspended state.
I also updated the mappings in nova.api.openstack.servers to map PAUSED to "error" and SUSPENDED to "suspended". Thoughts there are that we don't currently (openstack API v1.0) use pause, so if somehow an instance were to be paused an error occurred somewhere, or someone did something in error. Either way asking the xenserver host for the status would show "paused". Support for more power states needs to be added to the next version of the openstack API * fix bug #lp694311 * Added stack command-line tool * Cleans up nova.api.openstack.images and fix it to work with cloudservers api. Previously "cloudservers image-list" wouldn't work, now it will. There are mappings in place to handle s3 or glance/local image service. In the future when the local image service is working, we can probably drop the s3 mappings * Converted Volume model and operation to use UUIDs * Merging trunk * Merged trunk * Merging trunk, fixing failed tests * Merged trunk * merge trunk * Fixed after Jay's review. Integrated code from Soren (we now use the same 'magic number' for images without kernel & ramdisk * logs inner exception in nova/utils.py->import\_class * Fix Bug #693963 * merge trunk * Merge * Support IPv6 * Make nova work even when user has LANG or LC\_ALL configured * merged trunk, resolved trivial conflict * fixed merge conflict * Merged again from trunk * fixed a few docstrings, added \_() for gettext * Moves implementation specific Openstack API code from the middleware to the drivers. Also cleans up a few areas and ensures all the API tests are passing again * Merged trunk * Trying to remove twisted dependencies, this gets everything working under nosetests * Merged Monty's branch * Merged trunk and resolved conflicts * merged trunk * merged trunk * Simplifies and improves ldap schema * xenapi iscsi support + unittests * Merged trunk * Added reference in setup.py so that python setup.py test works now * merge lp:nova * merge trunk * merge trunk, fixed unittests, added i18n strings, cleanups etc etc * first merge after i18n * added tests to ensure the easy api works as a backend for Compute API * merge from trunk * Fixes reboot (and rescue) to work even if libvirt doesn't know about the instance and the network doesn't exist * merged trunk * Fixes reboot (and rescue) to work even if libvirt doesn't know about the instance and the network doesn't exist * Adds a flag to use the X-Forwarded-For header to find the ip of the remote server. This is needed when you have multiple api servers with a load balancing proxy in front. It is a flag that defaults to False because if you don't have a sanitizing proxy in front, users could masquerade as other ips by passing in the header manually * Merged trunk * merged trunk * Moves the ip allocation requests to the from the api host into calls to the network host made from the compute host * merged trunk and fixed conflicts * merged trunk * Optimize creation of nwfilter rules so they aren't constantly being recreated * fixed more conflicts * merged trunk again * merge trunk and upgrade to cheetah templating * Optimize nwfilter creation and project filter * Merging trunk * fixed conflicts * WSGI middleware for lockout after failed authentications of ec2 access key * Puts the creation of nova iptables chains into the source code and cleans up rule creation. 
This makes nova play more nicely with other iptables rules that may be created on the host * Merging trunk * merge trunk * Fixes per-project vpns (cloudpipe) and adds manage commands and support for certificate revocation * merge trunk * merged i8n and fixed conflicts * after trunk merge * Log all XenAPI actions to InstanceActions * Merged trunk * merging trunk * merging trunk * All merged with trunk and let's see if a new merge prop (with no pre-req) works. * merging in trunk * Merged trunk * PEP8 cleanup * Log all XenAPI actions * update db/api.py as well * don't allocate networks when getting vpn info * Added InstanceDiagnostics and InstanceActions DB models * Merged trunk * merge trunk * 1) Merged from trunk 2) 'type' parameter in VMHelper.fetch\_image converted in enum 3) Fixed pep8 errors 4) Passed unit tests * add a few extra joined objects to get instance * Tests pass after cleaning up allocation process * Merging trunk * Add raw disk image support * Adds support for Pause and Unpause of xenserver instances * Integrated changes from Soren (raw-disk-images). Updated authors file. All tests passed * eventlet merge updates * Some tweaks * first revision after eventlet merge. Currently xenapi-unittests are broken, but everything else seems to be running okay * Integrated eventlet\_merge patch * First pass at converting run\_tests.py to nosetests. The network and objctstore tests don't yet work. Also, we need to manually remove the sqlite file between runs * merged in project-vpns to get flag changes * move some flags around * merged trunk * merged trunk, fixed conflicts and tests * This branch removes most of the dependencies on twisted and moves towards the plan described by https://blueprints.launchpad.net/nova/+spec/unified-service-architecture * pep8 fixes * Restore code which was changed for testing reasons to the original state. Kudos to Armando for spotting this * Merged changes from trunk into the branch * Hostテーブルのカラム名を修正 FlatManager, FlatDHCPManagerに対応 * merged with trunk. fixed compute.pause test * Make sure we properly close the bzr WorkingTree in our Authors up-to-datedness unit test * clean up tests and add overriden time method to utils * merged from upstream * basic conversion of xs-pause to eventlet done * brougth clean-up from unittests branch and tests * Lots of PEP-8 work * added volume tests and extended fake to support them * merged upstream * Merged from trunk and fixed merge issues. Also fixed pep8 issues * updates per review * Initial work on i18n. This adds the installation of the nova domain in gettext to all the "endpoints", which are all the bin/\* files and run\_tests.py * pep8 * merge trunk * fixup after merge with trunk * merge with trey tests * Added LiveCD info as well as some changes to reflect consolidation of .conf files * Move security group refresh logic into ComputeAPI * First round of i18n-ifying strings in Nova * Initial i18n commit for endpoints. All endpoints must install gettext, which injects the \_ function into the builtins * Fixed spelling errors in index.rst * merge-a-tat-tat upstream to this branch * \* pylint fixes \* code clean-up \* first cut for xenapi unit tests * merged changes from sandy's branch * formatting and naming cleanup * get service unittests runnning again * Converted the instance table to use a uuid instead of a auto\_increment ID and a random internal\_id. I had to use a String(32) column with hex and not a String(16) with bytes because SQLAlchemy doesn't like non-unicode strings going in for String types. 
We could try another type, but I didn't want a primary\_key on blob types * merge with trey * Make XenServer VM diagnostics available through nova.virt.xenapi * Merged trunk * merging sandy's branch * raw instances can now be launched in xenapi (only as hvm at the moment) * merge with trunk to pull in admin-api branch * Flag to define which operations are exposed in the OpenStack API, disabling all others * Fixed Authors conflict and re-merged with trunk * intermediate commit to checkpoint progress * some pylint caught changes to compute * merge conflict * merged upstream changes * Merged trunk * merged updates to trunk * merge trunk * Finished cleaning up the openstack servers API, it no longer touches the database directly. Also cleaned up similar things in ec2 API and refactored a couple methods in nova.compute.api to accommodate this work * Pushed terminate instance and network manager/topic methods into network.compute.api * Merged trunk * Moved the reboot/rescue methods into nova.compute.api * merged with trunk. All clear! * コメントを除去 README.live\_migration.txtのレビュー結果を修正 * Added livecd instructions plus fixed references to .conf files * Added a script to use OpenDJ as an LDAP server instead of OpenLDAP. Also modified nova.sh to add an USE\_OPENDJ option, that will be checked when USE\_LDAP is set * It looks like Soren fixed the author file, can I hit the commit button? * merge trunk * Addresses bug 677475 by changing the DB column for internal\_id in the instances table to be unsigned * rev439ベースにライブマイグレーションの機能をマージ このバージョンはEBSなし、CPUフラグのチェックなし * Add iptables based security groups implementation * merge with lp:~armando-migliaccio/nova/xenapi-refactoring * merge trunk * Decreased the maximum value for instance-id generation from uint32 to int32 to avoid truncation when being entered into the instance table. Reverted fix to make internal\_id column a uint * Finished cleaning up the openstack servers API, it no longer touches the database directly. Also cleaned up similar things in ec2 API and refactored a couple methods in nova.compute.api to accomodate this work * Merged reboot-rescue into network-manager * Merged trunk * Consolidated the start instance logic in the two API classes into a single method. This also cleans up a number of small discrepencies between the two * Merged trunk and resolved conflicts * merge lp:~armando-migliaccio/nova/refactoring * merge trunk * Guarantee that the OpenStack API's Server-related responses will always contain a "name" value. And get rid of a redundant field in models.py * Oops, internal\_id isn't available until after a save. This code saves twice; if I moved it into the DB layer we could do it in one save. 
However, we're moving to one sqlite db per compute worker, so I'd rather have two saves in order to keep the logic in the right layer * Add include\_package\_data=True to setup.py * Broke parts of compute manager out into compute.api to separate what gets run on the API side vs the worker side * Moving the openldap schema out of nova.sh into it's own files, and adding sun (opends/opendj/sun directory server/fedora ds) schema files * brought latest changes from trunk * merged Justin Santa Barbara's raw-disk-image back into the latest trunk * merged trunk * Add a templating mechanism in the flag parsing * brought the xenapi refactoring in plus trunk changes * Add include\_package\_data=True to setup.py * A few more changes: \* Fixed up some flags \* Put in an updated nova.sh \* Broke out metadata forwarding so it will work in flatdhcp mode \* Added descriptive docstrings explaining the networking modes in more detail * small conflict resolution * Fix typo "nova.util" -> "nova.utils" * Fix typo "nova.util" -> "nova.utils" * Added a .mailmap that maps addresses in bzr to people's real, preferred e-mail addresses. (I made a few guesses along the way, feel free to adjust according to what is actually the preferred e-mail) * merged trunk, added recent nova.sh * add vpn ping and optimize vpn list * Address pep8 complaints * * fixed pep8 violations * added test for invalid handles * Adds images (only links one in), start for a nova-manage man file, and also documents all nova-manage commands. Can we merge it in even though the man page build isn't working? * Improved Pylint Score * Soren updated setup.py so that the man page builds. Will continue working on man pages for nova-compute and nova-network * Overwrite build\_sphinx, making it run once for each of the html and man builders * Update version to 2011.1 as that is the version we expect to release next * Fixes eventlet race condition in cloud tests * fix greenthread race conditions in trunk and floating ip leakage * Testing man page build through conf.py * Improved Pylint Score * merged with trunk * Update version to 2011.1 as that is the version we expect to release next * Adds nova-debug to tools directory, for debugging of instances that lose networking * Ryan\_Lane's code to handle /etc/network not existing when we try to inject /etc/network/interfaces into an image * Changed from fine-grained operation control to binary admin on/off setting * Lots of documentation and docstring updates * The docs are just going to be wrong for now. I'll file a bug upstream * Change how wsgified doc wrapping happens to fix test * pep8 * merge with trunk * merge in anne's changes * merge to remote * unify env syntax * create SPHINX\_DEBUG env var. Setting this will disable aggressive autodoc generation. Also provide some sample for P syntax * fix conf file from earlier merge * anne's changes to the networking documentation * Updated Networking doc * Added a .mailmap that maps addresses in bzr to people's real, preferred e-mail addresses. 
(I made a few guesses along the way, feel free to adjust according to what is actually the preferred e-mail) * merge in anne's changes * home page tweaks * Updated CSS and community.rst file * modifications and additions based on doc sprint * incorporate some feedback from todd and anne * merge in trunk * working on novadoc structure * Use the autodoc tools in the setup.py build\_sphinx toolchain * Fix include paths so setup.py build\_sphinx works again * back out stacked merge * Switch to module-per-file for the module index * Build autodocs for all our libraries * Per-project vpns, certificates, and revocation * Fix docstrings for wsigfied methods * small tweaks before context switch * use include to grab todd's quickstart * add in custom todo, and custom css * Format TODO items for sphinx todo extension * additions to home page * Change directory structure for great justice! * Getting Started Guide * have "contents" look the same as other headings * pep8 whitespace and line length fixes * merged trunk * prettier theme * Change socket type in nova.utils.get\_my\_ip() to SOCK\_DGRAM. This way, we don't actually have to set up a connection. Also, change the destination host to an IP (chose one of Google's DNS's at random) rather than a hostname, so we avoid doing a DNS lookup * Change socket type in nova.utils.get\_my\_ip() to SOCK\_DGRAM. This way, we don't actually have to set up a connection. Also, change the destination host to an IP (chose one of Google's DNS's at random) rather than a hostname, so we avoid doing a DNS lookup * ISCSI Volume support * merged * merge * merged trunk * API endpoint documentation * basics to get proxied ajaxterm working with virsh * merged trunk, just in case * Update database docs * Add support for google analytics to only the hudson-produced docs * Changes to conf.py * Update database page a bit * Pep-257 cleanups * merge trunkdoc * Moves db writes into compute manager class. Cleans up sqlalchemy model/api to remove redundant calls for updating what is really a dict * Fix wiki link * merged and fixed conflicts * updates to auth, concepts, and network, fix of docstring * cleanup rrd doc generation * New structure for documentation * Fixes PEP8 violations from the last few merges * More PEP8 fixes that were introduced in the last couple commits * Fixes service unit tests after tornado excision * renamed target\_id to iscsi\_target * merged gundlach's excision * Delete BaseTestCase and with it the last reference to tornado * Removes some cruft from sqlalchemy/models.py like unused imports and the unused str\_id method * Adds rescue and unrescue commands * actually remove the conditional * fix tests by removing missed reference to prefix and unnecessary conditional in generate\_uid * add nova-debug to setup.py * Remove the last vestigial bits of tornado code still in use * Exceptions in the OpenStack API will be converted to Faults as they should be, rather than barfing a stack trace to the user * Duplicate the two trivial escaping functions remaining from tornado's code and remove the dependency * ISCSI Volume support * merge lp:nova * merged trunk and fixed conflicts/changes * part way through porting the codebase off of twisted * Another pep8 cleanup branch for nova/tests, should be merged after lp:~eday/nova/pep8-fixes-other. After this, the pep8 violation count is 0! * Another pep8 cleanup branch for nova/api, should be merged after lp:~eday/nova/pep8-fixes * PEP8 cleanup in nova/db. 
There should be no functional changes here, just style changes to get violations down * PEP8 and pylint cleanup. There should be no functional changes here, just style changes to get violations down * Moves db writes into compute manager class. Cleans up sqlalchemy model/api to remove redundant calls for updating what is really a dict * Cleanup of doc for dependencies (redis optional, remove tornado, etc). Please check for accuracy * Made updates based on review comments * Updated documentation * Update version set in setup.py to 2010.1 in preparation for Austin release * Also update version in docs * Update version to 2010.1 in preparation for Austin release * \* Fills out the Parallax/Glance API calls for update/create/delete and adds unit tests for them. \* Modifies the ImageController and GlanceImageService/LocalImageService calls to use index and detail routes to comply perfectly with the RS/OpenStack API * This branch converts incoming data to the api into the proper type * Fix the --help flag for printing help on twistd-based services * Make Redis completely optional: * trivial style change * prevent leakage of FLAGS changes across tests * This branch modifies the fixes all of the deprecation warnings about empty context. It does this by adding the following fixes/features \* promotes api/context.py to context.py because it is used by the whole system \* adds more information to the context object \* passes the context through rpc \* adds a helper method for promoting to admin context (elevate()) \* modifies most checks to use context.project\_id instead of context.project.id to avoid trips to the database * Merged with trunk, fixed broken stuff * Fixes a few concurrency issues with creating volumes and instances. Most importantly it adds retries to a number of the volume shell commands and it adds a unique constraint on export\_devices and a safe create so that there aren't multiple copies of export devices in the database * merged trunk * merged concurrency * merged trunk * cleaned up most of the issues * elevate in proper places, fix a couple of typos * merged trunk * Fixes bug 660115 * Fix several problems keeping AuthMiddleware from functioning in the OpenStack API * Xen support * Adds flat networking + dhcpserver mode * This patch removes the ugly network\_index that is used by VlanManager and turns network itself into a pool. It adds support for creating the networks through an api command: nova-manage network create # creates all of the networks defined by flags or nova-manage network create 5 # create the first five networks * merged upstream * cleanup leftover addresses * merged trunk * merged trunk * merged trunk * merged trunk * Revert the conversion to 64-bit ints stored in a PickleType column, because PickleType is incompatible with having a unique constraint * Revert 64 bit storage and use 32 bit again. I didn't notice that we verify that randomly created uids don't already exist in the DB, so the chance of collision isn't really an issue until we get to tens of thousands of machines. Even then we should only expect a few retries before finding a free ID * This patch adds support for EC2 security groups using libvirt's nwfilter mechanism, which in turn uses iptables and ebtables on the individual compute nodes. This has a number of benefits: \* Inter-VM network traffic can take the fastest route through the network without our having to worry about getting it through a central firewall. \* Not relying on a central firewall also removes a potential SPOF. 
\* The filtering load is distributed, offering great scalability * Change internal\_id from a 32 bit int to a 64 bit int * 32 bit internal\_ids become 64 bit. Since there is no 64 bit native type in SqlAlchemy, we use PickleType which uses the Binary SqlAlchemy type under the hood * Catch exception.NotFound when getting project VPN data * Adds stubs and tests for GlanceImageService and LocalImageService. Adds basic plumbing for ParallaxClient and TellerClient and hooks that into the GlanceImageService * Cleanup around the rackspace API for the ec2 to internal\_id transition * A little more clean up * Replace model.Instance.ec2\_id with an integer internal\_id so that both APIs can represent the ID to external users * Fix clause comparing id to internal\_id * merged trunk and fixed tests * merge from gundlach ec2 conversion * Fix broken unit tests * A shiny, new Auth driver backed by SQLAlchemy. Read it and weep. I did * Revert r312 * Accidentally renamed volume related stuff * Bug #653534: NameError on session\_get in sqlalchemy.api.service\_update * Fixes to address the following issues: * Bug #654025: nova-manage project zip and nova-manage vpn list broken by change in DB semantics when networks are missing * Bug #653534: NameError on session\_get in sqlalchemy.api.service\_update * Adjust db api usage according to recent refactoring * Refactor sqlalchemy api to perform contextual authorization * Fix the deprecation warnings for passing no context * Address a few comments from Todd * Merged trunk * Locked down fixed ips and improved network tests * merged remove-network-index * Fixed flat network manager with network index gone * merged trunk * First attempt at a uuid generator -- but we've lost a 'topic' input so i don't know what that did * Method cleanup and fixing the servers tests * merged trunk, removed extra quotas * Adds support for periodic\_tasks on manager that are regularly called by the service and recovers fixed\_ips that didn't get disassociated properly * Replace database instance 'ec2\_id' with 'internal\_id' throughout the nova.db package. internal\_id is now an integer -- we need to figure out how to make this a bigint or something * merged trunk * Includes changes for creating instances via the Rackspace API. 
Utilizes much of the existing EC2 functionality to power the Rackspace side of things, at least for now * Add a DB backend for auth manager * Bug #652103: NameError in exception handler in sqlalchemy API layer * Bug #652103: NameError in exception handler in sqlalchemy API layer * Cleaned up db/api.py * Refactored APIRequestContext * Simplified authorization with decorators" " * Wired up context auth for keypairs * Completed quota context auth * Finished context auth for network * Finished instance context auth * Finished instance context auth * Made network tests pass again * Wired up context auth for services * Progress on volumes Fixed foreign keys to respect deleted flag * Support the pagination interface in RS API -- the &offset and &limit parameters are now recognized * Update from trunk to handle one-line merge conflict * Support fault notation in error messages in the RS API * fix the primary and secondary join * autocreate the models and use security\_groups * Began wiring up context authorization * removed a few extra items * merged with soren's branch * fix loading to ignore deleted items * Add user-editable name & notes/description to volumes, instances, and images * merged trunk * fix join and misnamed method * fix eagerload to be joins that filter by deleted == False * \* Create an AuthManager#update\_user method to change keys and admin status. \* Refactor the auth\_unittest to not care about test order \* Expose the update\_user method via nova-manage * Updates the fix-iptables branch with a number of bugfixes * Makes sure that multiple copies of nova-network don't create multiple copies of the same NetworkIndex * Fix a few errors in api calls related to mistyped database methods for floating\_ips: specifically describe addresses and and associate address * Merged Termie's branch that starts tornado removal and fixed rpc test cases for twisted. Nothing is testing the Eventlet version of rpc.call though yet * Adds a disabled flag to service model and check for it when scheduling instances and volumes * Adds bpython support to nova-manage shell, because it is super sexy * Added random ec2 style id's for volumes and instances * merged and removed duplicated methods * fixed merge conflicts * Implementation of the Rackspace servers API controller * Added checks for uniqueness for ec2 id * add disabled column to services and check for it in scheduler * merged network-lease-fix * merged floating-ips * move default group creation to api * Implemented random instance and volume strings for ec2 api * merge from trunk * get rid of network indexes and make networks into a pool * merged trunk * return a value if possible from export\_device\_create\_safe * merged floating-ip-by-project * merged network-lease-fix * merged trunk * Stop trying to install nova-api-new (it's gone). Install nova-scheduler * db api call to get instances by user and user checking in each of the server actions * Add db api methods for retrieving the networks for which a host is the designated network host * Merged Termie's branch and fixed rpc test cases for tesited. Nothing is testing the Eventlet version of rpc.call though yet * Install nova-scheduler * nova-api-new is no more. 
Don't attempt to install it * Put EC2 API -> eventlet back into trunk, fixing the bits that I missed when I put it into trunk on 9/21 * Implementation of Rackspace token based authentication for the Openstack API * Some more refactoring and another unit test * Refactored the auth branch based on review feedback * Merged gundlach's branch * merged trunk * merge from trunk * typo in instance\_get * typo in instance\_get * merged trunk and fixed errors * cleaned up exception handling for fixed\_ip\_get * merged trunk * Delete nova.endpoint module, which used Tornado to serve up the Amazon EC2 API. Replace it with nova.api.ec2 module, which serves up the same API via a WSGI app in Eventlet. Convert relevant unit tests from Twisted to eventlet * merged trunk * merged trunk * Some more refactoring and another unit test * Implements quotas with overrides for instances, volumes, and floating ips * Moves keypairs out of ldap and into the common datastore * allows api servers to have a list of regions, allowing multi-cluster support if you have a shared image store and user database * merged trunk * merged trunk * Refactored the auth branch based on review feedback * Removes second copy of ProcessExecutionError that creeped in during a bad merge * Adds timing fields to instances and volumes to track launch times and schedule times * Adds timing fields to instances and volumes to track launch times and schedule times * add in support for ajaxterm console access * Better error message on the failure of a spawned process, and it's a ProcessExecutionException irrespective of how the process is run (twisted or not) * Proposing merge to get feedback on orm refactoring. I am very interested in feedback to all of these changes * Clean up use of ORM to remove the need for scoped\_session * Filters all get defined when running an instance * multiple network controllers will not create duplicate indexes * removed second copy of ProcessExecutionError * simplified query * missed a space * set leased = 0 as well on disassociate update * speed up the query and make sure allocated is false * workaround for mysql select in update * Periodic callback for services and managers. 
Added code to automatically disassociate stale ip addresses * merged trunk * Integrity error is in a different exc file * allow multiple volumes to run ensure\_blades without creating duplicates * merged instance time and added better concurrency * make fixed\_ip\_get\_by\_address return the instance as well so we don't run into concurrency issues where it is disassociated in between * speed up generation of dhcp\_hosts and don't run into None errors if instance is deleted * don't allocate the same floating ip multiple times * merged trunk * implement floating\_ip\_get\_all\_by\_project and renamed db methods that get more then one to get\_all\_by instead of get\_by * merged scheduler * tests for volumes work * update query and test * merged quotas * use gigabytes and cores * Security Group API layer cleanup * merged trunk * remerged scheduler * merged trunk * merged trunk * merged trunk * merged trunk * fixed old key reference and made keypair name constistent -> key\_pair * fixed tests, added a flag for updating dhcp on disassociate * simplified network instance association * fix network association issue * Finished security group / project refactor * delete keypairs when a user is deleted * moved keypairs to db using the same interface * Refactored to security group api to support projects * merged orm and put instance in scheduling state * First pass of nwfilter based security group implementation. It is not where it is supposed to be and it does not actually do anything yet * Create and delete security groups works. Adding and revoking rules works. DescribeSecurityGroups returns the groups and rules. So, the API seems to be done. Yay * merged describe\_speed * added scheduled\_at to instances and volumes * merged orm * merged orm * merged orm * make the db creates return refs instead of ids * merged orm, added database methods for getting volume and ip data for projects * database support for quotas * merged support code from orm branch * added floating ip commands and launched\_at terminated\_at, deleted\_at for objects * merged orm * remove extraneous get\_host calls that were requiring an extra db trip * Authorize and Revoke access now works * list command for floating ips * merged describe speed * floating ip commands * speed up describe by loading fixed and floating ips * AuthorizeSecurityGroupIngress now works * Alright, first hole poked all the way through. 
We can now create security groups and read them back * don't fail in db if context isn't a dict, since we're still using a class based context in the api * logging for backend is now info instead of error * merged orm * merged orm * consistent naming for instance\_set\_state * Tests turn things into inlineCallbacks * Remove tornado-related code from almost everything * make timestamps for instances and volumes, includes additions to get deleted objects from db using deleted flag * updated to the new orm code * changed a few unused context to \_context * a few formatting fixes and moved exception * fixed a few bugs in volume handling * Last of cleanup, including removing fake\_storage flage * review db code cleanup * more fixes to session handling * few typos in updates * clean up of session handling * merged orm * fix floating\_ip to follow standard create pattern * merged orm\_deux * Lots of fixes to make the nova commands work properly and make datamodel work with mysql properly * removed extra equals * removed extra file and updated sql note * more scheduler tests * merged trunk * merged orm branch * merged trunk and cleaned up test * renamed daemon to service and update db on create and destroy * merged orm branch * scheduler + unittests * removed underscores from used context * This improves the changelog generated as part of "setup.py sdist". If you look at it now, it says that Tarmac has done everything and every little commit is listed. With this patch, it only logs the "top-most" commit and credits the author rather than the committer * Moved API tests into a sub-folder of the tests/ and added a stubbed-out test declarations to mirror existing API tickets * merged orm branch * pylint cleanup of db classes * rename node\_name to host * merged trunk * Better log formatter for Nova. It's just like gnuchangelog, but logs the author rather than the committer * Adjust setup.py to match nova-rsapi -> nova-api-new rename * Fix up setup.py to match nova-rsapi -> nova-api-new rename * more cleanup and pylint fixes * pep8 cleanup * merged trunk, fixed a couple errors * run and terminate work * undo change to get\_my\_ip * all tests pass again * merged devin's sqlalchemy changes * Making tests pass * pylint fixes for /nova/virt/connection.py * pylint fixes for nova/objectstore/handler.py * ip addresses work now * Add Flavors controller supporting * Resolve conflicts and merge trunk * instance runs * tests pass * Making tests pass * Refactored orm to support atomic actions * moved network code into business layer * split volume into service/manager/driver * moved models.py * removed the last few references to models.py * fixed volume unit tests * get to look like trunk * network tests pass again * Fixes issue with the same ip being assigned to multiple instances * merged trunk and fixed tests * move network\_type flag so it is accesible in data layer * more data layer breakouts, lots of fixes to cloud.py * merged jesse * Initial support for Rackspace API /image requests. They will eventually be backed by Glance * work towards volumes using db layer * merge vish * merge vish * merge vish * more cleanup * getting run/terminate/describe to work * run instances works * removed old imports and moved flags * merge and fixes to creates to all return id * bunch more fixes * moving network code and fixing run\_instances * jesse's run\_instances changes * fix daemons and move network code * Rework virt.xenapi's concurrency model. 
There were many places where we were inadvertently blocking the reactor thread. The reworking puts all calls to XenAPI on background threads, so that they won't block the reactor thread * merged trunk and fixed merge errors * Refactored network model access into data abstraction layer * Moves auth.manager to the data layer * Add db abstraction and unittets for service.py * Alphabetize the methods in the db layer * Better error message on subprocess spawn fail, and it's a ProcessExecutionException irrespective of how the process is run * Check exit codes when spawning processes by default Also pass --fail to curl so that it sets exit code when download fails * move volume code into datalayer and cleanup * Added unit tests for WSGI helpers and base WSGI API * merged termies abstractions * Move deferredToThread into utils, as suggested by termie * Data abstraction for compute service * Merged with trunk * Merged with trunk * Merged trunk * Since pylint=0.19 is our version, force everyone to use the disable-msg syntax * Changed our minds: keep pylint equal to Ubuntu Lucid version, and use disable-msg throughout * Newest pylint supports 'disable=', not 'disable-msg=' * merged trunk * merged refresh from sleepsonthefloor * See description of change... what's the difference between that message and this message again? * Fixes quite a few style issues across the entire nova codebase bringing it much closer to the guide described in HACKING * merge from trunk * merged trunk and fixed conflicts * Added documentation for the nova.virt connection interface, a note about the need to chmod the objectstore script, and a reference for the XenAPI module * rather comprehensive style fixes * Add new libvirt\_type option "uml" for user-mode-linux.. This switches the libvirt URI to uml:///system and uses a different template for the libvirt xml * merge in latedt from vish * Catches and logs exceptions for rpc calls and raises a RemoteError exception on the caller side * Removes requirement of internet connectivity to run api server * merged trunk * merged fix-hostname and fixed conflict * Improves pep8 compliance and pylint score in network code * refactor to have base helper class with shared session and engine * got run\_tests.py to run (with many failed tests) * Make WSGI routing support routing to WSGI apps or to controller+action * Merged with trunk * Fix exception in get\_info * Merged with trunk * Merged with trunk * Implement VIF creation in the xenapi module * merged trunk * 2 changes in doing PEP8 & Pylint cleaning: \* adding pep8 and pylint to the PIP requirements files for Tools \* light cleaning work (mostly formatting) on nova/endpoints/cloud.py * More changes to volume to fix concurrency issues. Also testing updates * merged trunk, fixed an error with releasing ip * pylint fixes for /nova/test.py * Fixes pylint issues in /nova/server.py * importing merges from hudson branch * This branch builds off of Todd and Michael's API branches to rework the Rackspace API endpoint and WSGI layers * Fix up variable names instead of disabling pylint naming rule. 
Makes variables able to be a single letter in pylintrc * Disables warning about TODO in code comments in pylintrc * More pylint/pep8 cleanup, this time in bin/\* files * pylint fixes for /nova/test.py * Pull trunk merge through lp:~ewanmellor/nova/add-contains * Pull trunk merge through lp:~ewanmellor/nova/xapi-plugin * Merged with trunk again * Merged with trunk * Greater compliance with pep8/pylint style checks * Merged trunk * merged with trunk * Merged Todd and Michael's changes * Make network its own worker! This separates the network logic from the api server, allowing us to have multiple network controllers. There a lot of stuff in networking that is ugly and should be modified with the datamodel changes. I've attempted not to mess with those things too much to keep the changeset small(ha!) * merged trunk * merged trunk * Fix deprecation warning in AuthManager. \_\_new\_\_ isn't allowed to take args * Get IP doesn't fail of you not connected to the intetnet * Merged with trunk * Added --fail argument to curl invocations, so that HTTP request fails get surfaced as non-zero exit codes * Merged with trunk * Merged with trunk * Fixed assertion "Someone released me too many times: too many tokens!" * Merged with trunk to resolve merge conflicts * oops retry and add extra exception check * Added ChangeLog generation * Implemented admin api for rbac * Adds initial support for XenAPI (not yet finished) * More merges from trunk. Not everything came over the first time * Allow driver specification in AuthManager creation * Fixed pep8 issues in setup.py - thanks redbo * Releaed 0.9.0, now on 0.9.1 * Added ChangeLog generation * allow driver to be passed in to auth manager instead of depending solely on flag * Merged trunk * Create a model for storing session tokens * bzr merge lp:nova/trunk * Tagged 0.9.0 and bumped the version to 0.9.1 * Got the tree set for debian packaging * Added the gitignore files back in for the folks who are still on the git * Updated setup.py file to install stuff on a python setup.py install command * Removed gitignore files * Merged trunk * Bump version to 0.9.0 * Makes the compute and volume daemon workers use a common base class called Service. Adds a NetworkService in preparation for splitting out networking code. General cleanup and standardizarion of naming * Bump version to 0.9.0. Change author to "OpenStack". Change author\_email to nova@lists.launchpad.net. Change url to http://www.openstack.org/. Change description to "cloud computing fabric controller" * merged trunk * Makes the objectstore require authorization, checks it properly, and makes nova-compute provide it when fetching images * Refactor of auth code * Adds support scripts for installing deps into a virtualenv * Move virtualenv installation out of the makefile * Expiry awareness for SessionToken * merged trunk * merged trunk * Updated doc layout to the Sphinx two-dir layout * Changes nova-volume to use twisted * Updated sphinx layout to a two-dir layout like swift. Updated a doc string to get rid of a Sphinx warning * Merged with trunk, since a lot of useful things have gone in there recently * renamed xxxnode to xxservice * Check exit codes when spawning processes by default * Merged trunk, fixed extra references to fake\_users * merge with twisted-volume * Locally administered mac addresses have the second least significant bit of the most significant byte set. 
If this byte is set then udev on ubuntu doesn't set persistent net rules * use a locally administered mac address so it isn't saved by udev * Merged trunk. Fixed new references to UserManager * Fixes to dhcp lease code to use a flagfile * merged trunk * Replace tornado objectstore with twisted web * merged in trunk and fixed import merge errors * Add build\_sphinx support * Added a config file to let setup.py drive building the sphinx docs * merge with singleton pool * reorder imports spacing * remove import of vendor since we have PPA now * remove vendor * update copyrights * fix merge errors * datetime import typo * added missing isotime method from utils * Fixed the os.environ patch (bogus) * Fixes as per Vish review (whitespace, import statements) * Got dhcpleasor working, with test ENV for testing, and rpc.cast for real world * Capture signals from dnsmasq and use them to update network state * Removed trailing whitespace from header * Updated licenses * removed all references to keeper * Fixes based on code review 27001 * Admin API + Worker Tracking * Removed trailing whitespace from header * Updated licenses * fix fakeldap so it can use redis keeper * Refactored Instance to get rid of \_s bits, and fixed some bugs in state management * Flush redis db in setup and teardown of tests * Update documentation * make get\_my\_ip return 127.0.0.1 for testing * whitespace fixes for nova/utils.py * missed the gitignore * initial commit ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/LICENSE0000664000175000017500000002363700000000000016655 0ustar00zuulzuul00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.292778 openstack_placement-13.0.0/PKG-INFO0000644000175000017500000000760600000000000016741 0ustar00zuulzuul00000000000000Metadata-Version: 2.1 Name: openstack-placement Version: 13.0.0 Summary: Resource provider inventory usage and allocation service Home-page: https://docs.openstack.org/placement/latest/ Author: OpenStack Author-email: openstack-discuss@lists.openstack.org Project-URL: Bug Tracker, https://bugs.launchpad.net/placement Project-URL: Documentation, https://docs.openstack.org/placement/latest/ Project-URL: API Reference, https://docs.openstack.org/api-ref/placement/ Project-URL: Source Code, https://opendev.org/openstack/placement Project-URL: Release Notes, https://docs.openstack.org/releasenotes/placement/ Classifier: Development Status :: 5 - Production/Stable Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Programming Language :: Python :: 3 :: Only Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3.11 Classifier: Programming Language :: Python :: 3.12 Requires-Python: >=3.9 License-File: LICENSE Requires-Dist: pbr>=3.1.1 Requires-Dist: SQLAlchemy>=1.4.0 Requires-Dist: keystonemiddleware>=4.18.0 Requires-Dist: Routes>=2.3.1 Requires-Dist: WebOb>=1.8.2 Requires-Dist: jsonschema>=3.2.0 Requires-Dist: requests>=2.25.0 Requires-Dist: oslo.concurrency>=3.26.0 Requires-Dist: oslo.config>=6.7.0 Requires-Dist: oslo.context>=2.22.0 Requires-Dist: oslo.log>=4.3.0 Requires-Dist: oslo.serialization>=2.25.0 Requires-Dist: oslo.utils>=4.5.0 Requires-Dist: oslo.db>=8.6.0 Requires-Dist: oslo.policy>=4.4.0 Requires-Dist: oslo.middleware>=3.31.0 Requires-Dist: oslo.upgradecheck>=1.3.0 Requires-Dist: 
os-resource-classes>=1.1.0 Requires-Dist: os-traits>=3.3.0 Requires-Dist: microversion-parse>=0.2.1 If you are viewing this README on GitHub, please be aware that placement development happens on `OpenStack git `_ and `OpenStack gerrit `_. =================== OpenStack Placement =================== .. image:: https://governance.openstack.org/tc/badges/placement.svg :target: https://governance.openstack.org/tc/reference/tags/index.html OpenStack Placement provides an HTTP service for managing, selecting, and claiming providers of classes of inventory representing available resources in a cloud. API --- To learn how to use Placement's API, consult the documentation available online at: - `Placement API Reference `__ For more information on OpenStack APIs, SDKs and CLIs in general, refer to: - `OpenStack for App Developers `__ - `Development resources for OpenStack clouds `__ Operators --------- To learn how to deploy and configure OpenStack Placement, consult the documentation available online at: - `OpenStack Placement `__ In the unfortunate event that bugs are discovered, they should be reported to the appropriate bug tracker. If you obtained the software from a 3rd party operating system vendor, it is often wise to use their own bug tracker for reporting problems. In all other cases use the master OpenStack bug tracker, available at: - `Bug Tracker `__ - `File new Bug `__ Developers ---------- For information on how to contribute to Placement, please see the contents of CONTRIBUTING.rst. Further developer focused documentation is available at: - `Official Placement Documentation `__ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/README.rst0000664000175000017500000000353300000000000017330 0ustar00zuulzuul00000000000000If you are viewing this README on GitHub, please be aware that placement development happens on `OpenStack git `_ and `OpenStack gerrit `_. =================== OpenStack Placement =================== .. image:: https://governance.openstack.org/tc/badges/placement.svg :target: https://governance.openstack.org/tc/reference/tags/index.html OpenStack Placement provides an HTTP service for managing, selecting, and claiming providers of classes of inventory representing available resources in a cloud. API --- To learn how to use Placement's API, consult the documentation available online at: - `Placement API Reference `__ For more information on OpenStack APIs, SDKs and CLIs in general, refer to: - `OpenStack for App Developers `__ - `Development resources for OpenStack clouds `__ Operators --------- To learn how to deploy and configure OpenStack Placement, consult the documentation available online at: - `OpenStack Placement `__ In the unfortunate event that bugs are discovered, they should be reported to the appropriate bug tracker. If you obtained the software from a 3rd party operating system vendor, it is often wise to use their own bug tracker for reporting problems. In all other cases use the master OpenStack bug tracker, available at: - `Bug Tracker `__ - `File new Bug `__ Developers ---------- For information on how to contribute to Placement, please see the contents of CONTRIBUTING.rst. 
Further developer focused documentation is available at: - `Official Placement Documentation `__ ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2087777 openstack_placement-13.0.0/api-ref/0000775000175000017500000000000000000000000017160 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2207778 openstack_placement-13.0.0/api-ref/ext/0000775000175000017500000000000000000000000017760 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/ext/__init__.py0000664000175000017500000000000000000000000022057 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/ext/validator.py0000664000175000017500000000436400000000000022326 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Test to see if docs exists for routes and methods in the placement API.""" import os from placement import handler # A humane ordering of HTTP methods for sorted output. ORDERED_METHODS = ['GET', 'POST', 'PUT', 'PATCH', 'DELETE'] DEPRECATED_METHODS = [('POST', '/resource_providers/{uuid}/inventories')] def _header_line(map_entry): method, route = map_entry line = '.. rest_method:: %s %s' % (method, route) return line def inspect_doc(app): """Load up doc_files and see if any routes are missing. The routes are defined in handler.ROUTE_DECLARATIONS. """ doc_files = [os.path.join(app.srcdir, file) for file in os.listdir(app.srcdir) if file.endswith(".inc")] routes = [] for route in sorted(handler.ROUTE_DECLARATIONS, key=len): # Skip over the '' route. 
if route: for method in ORDERED_METHODS: if method in handler.ROUTE_DECLARATIONS[route]: routes.append((method, route)) header_lines = [] for map_entry in routes: if map_entry not in DEPRECATED_METHODS: header_lines.append(_header_line(map_entry)) content_lines = [] for doc_file in doc_files: with open(doc_file) as doc_fh: content_lines.extend(doc_fh.read().splitlines()) missing_lines = [] for line in header_lines: if line not in content_lines: missing_lines.append(line) if missing_lines: msg = ['Documentation likely missing for the following routes:', ''] for line in missing_lines: msg.append(line) raise ValueError('\n'.join(msg)) def setup(app): app.connect('builder-inited', inspect_doc) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2247777 openstack_placement-13.0.0/api-ref/source/0000775000175000017500000000000000000000000020460 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/aggregates.inc0000664000175000017500000001224300000000000023266 0ustar00zuulzuul00000000000000============================ Resource provider aggregates ============================ Each resource provider can be associated with one or more other resource providers in groups called aggregates. API calls in this section are used to list and update the aggregates that are associated with one resource provider. Provider aggregates are used for modeling relationships among providers. Examples may include: * A shared storage pool providing DISK_GB resources to compute node providers that provide VCPU and MEMORY_MB resources. * Affinity/anti-affinity relationships such as physical location, power failure domains, or other reliability/availability constructs. * Groupings of compute host providers *corresponding to* Nova host aggregates or availability zones. .. note:: Placement aggregates are *not* the same as Nova host aggregates and should not be considered equivalent. The primary differences between Nova's host aggregates and placement aggregates are the following: * In Nova, a host aggregate associates a *nova-compute service* with other nova-compute services. Placement aggregates are not specific to a nova-compute service and are, in fact, not compute-specific at all. A resource provider in the Placement API is generic, and placement aggregates are simply groups of generic resource providers. This is an important difference especially for Ironic, which when used with Nova, has many Ironic baremetal nodes attached to a single nova-compute service. In the Placement API, each Ironic baremetal node is its own resource provider and can therefore be associated to other Ironic baremetal nodes via a placement aggregate association. * In Nova, a host aggregate may have *metadata* key/value pairs attached to it. All nova-compute services associated with a Nova host aggregate share the same metadata. Placement aggregates have no such metadata because placement aggregates *only* represent the grouping of resource providers. In the Placement API, resource providers are individually decorated with *traits* that provide qualitative information about the resource provider. * In Nova, a host aggregate dictates the *availability zone* within which one or more nova-compute services reside. While placement aggregates may be used to *model* availability zones, they have no inherent concept thereof. .. 
note:: Aggregates API requests are available starting from version 1.1. List resource provider aggregates ================================= .. rest_method:: GET /resource_providers/{uuid}/aggregates Return a list of aggregates associated with the resource provider identified by `{uuid}`. Normal Response Codes: 200 Error response codes: itemNotFound(404) if the provider does not exist. (If the provider has no aggregates, the result is 200 with an empty aggregate list.) Request ------- .. rest_parameters:: parameters.yaml - uuid: resource_provider_uuid_path Response (microversions 1.1 - 1.18) ----------------------------------- .. rest_parameters:: parameters.yaml - aggregates: aggregates Response Example (microversions 1.1 - 1.18) ------------------------------------------- .. literalinclude:: ./samples/aggregates/get-aggregates.json :language: javascript Response (microversions 1.19 - ) -------------------------------- .. rest_parameters:: parameters.yaml - aggregates: aggregates - resource_provider_generation: resource_provider_generation Response Example (microversions 1.19 - ) ---------------------------------------- .. literalinclude:: ./samples/aggregates/get-aggregates-1.19.json :language: javascript Update resource provider aggregates =================================== Associate a list of aggregates with the resource provider identified by `{uuid}`. .. rest_method:: PUT /resource_providers/{uuid}/aggregates Normal Response Codes: 200 Error response codes: badRequest(400), itemNotFound(404), conflict(409) Request (microversion 1.1 - 1.18) --------------------------------- .. rest_parameters:: parameters.yaml - uuid: resource_provider_uuid_path - aggregates: aggregates Request example (microversion 1.1 - 1.18) ----------------------------------------- .. literalinclude:: ./samples/aggregates/update-aggregates-request.json :language: javascript Request (microversion 1.19 - ) --------------------------------- .. rest_parameters:: parameters.yaml - uuid: resource_provider_uuid_path - aggregates: aggregates - resource_provider_generation: resource_provider_generation Request example (microversion 1.19 - ) ----------------------------------------- .. literalinclude:: ./samples/aggregates/update-aggregates-request-1.19.json :language: javascript Response (microversion 1.1 - ) ------------------------------ .. rest_parameters:: parameters.yaml - aggregates: aggregates - resource_provider_generation: resource_provider_generation_v1_19 Response Example (microversion 1.1 - 1.18) ------------------------------------------ .. literalinclude:: ./samples/aggregates/update-aggregates.json :language: javascript Response Example (microversion 1.19 - ) ------------------------------------------ .. literalinclude:: ./samples/aggregates/update-aggregates-1.19.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/allocation_candidates.inc0000664000175000017500000000716500000000000025470 0ustar00zuulzuul00000000000000===================== Allocation candidates ===================== .. note:: Allocation candidates API requests are available starting from version 1.10. List allocation candidates ========================== Returns a dictionary representing a collection of allocation requests and resource provider summaries. Each allocation request has information to form a ``PUT /allocations/{consumer_uuid}`` request to claim resources against a related set of resource providers. 
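For illustration only, a minimal sketch of that flow follows. It assumes the third-party ``requests`` library, a placeholder ``PLACEMENT`` endpoint with authentication handled elsewhere, and the request and response shapes of microversion 1.28 or later; none of the names below come from the placement codebase::

    import uuid

    import requests

    PLACEMENT = 'http://placement.example.com'  # hypothetical endpoint
    HEADERS = {'OpenStack-API-Version': 'placement 1.28'}  # plus auth headers

    # Ask for candidates collectively able to serve the requested resources.
    resp = requests.get(
        PLACEMENT + '/allocation_candidates',
        params={'resources': 'VCPU:1,MEMORY_MB:512,DISK_GB:10'},
        headers=HEADERS)
    candidates = resp.json()['allocation_requests']

    # Pick one candidate (a real scheduler would also weigh the
    # provider_summaries) and claim it for a new consumer.
    claim = candidates[0]
    claim.update({
        'consumer_generation': None,      # the consumer does not exist yet
        'project_id': str(uuid.uuid4()),  # placeholder project and user ids
        'user_id': str(uuid.uuid4()),
    })
    consumer_uuid = uuid.uuid4()
    requests.put(
        PLACEMENT + '/allocations/%s' % consumer_uuid,
        json=claim, headers=HEADERS)
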
Additional parameters might be required, see `Update allocations`_. As several allocation requests are available it's necessary to select one. To make a decision, resource provider summaries are provided with the inventory/capacity information. For example, this information is used by nova-scheduler's FilterScheduler to make decisions about on which compute host to build a server. You can also find additional case studies of the request parameters in the `Modeling with Provider Trees`_ document. .. rest_method:: GET /allocation_candidates Normal Response Codes: 200 Error response codes: badRequest(400) Request ------- .. rest_parameters:: parameters.yaml - resources: resources_query_ac - required: required_traits_unnumbered - member_of: allocation_candidates_member_of - in_tree: allocation_candidates_in_tree - resourcesN: resources_query_granular - requiredN: required_traits_granular - member_ofN: allocation_candidates_member_of_granular - in_treeN: allocation_candidates_in_tree_granular - group_policy: allocation_candidates_group_policy - limit: allocation_candidates_limit - root_required: allocation_candidates_root_required - same_subtree: allocation_candidates_same_subtree Response (microversions 1.12 - ) -------------------------------- .. rest_parameters:: parameters.yaml - allocation_requests: allocation_requests - provider_summaries: provider_summaries_1_12 - allocations: allocations_by_resource_provider - resources: resources - capacity: capacity - used: used - traits: traits_1_17 - parent_provider_uuid: resource_provider_parent_provider_uuid_response_1_29 - root_provider_uuid: resource_provider_root_provider_uuid_1_29 - mappings: mappings Response Example (microversions 1.34 - ) ---------------------------------------- .. literalinclude:: ./samples/allocation_candidates/get-allocation_candidates-1.34.json :language: javascript Response Example (microversions 1.29 - 1.33) -------------------------------------------- .. literalinclude:: ./samples/allocation_candidates/get-allocation_candidates-1.29.json :language: javascript Response Example (microversions 1.17 - 1.28) -------------------------------------------- .. literalinclude:: ./samples/allocation_candidates/get-allocation_candidates-1.17.json :language: javascript Response Example (microversions 1.12 - 1.16) -------------------------------------------- .. literalinclude:: ./samples/allocation_candidates/get-allocation_candidates-1.12.json :language: javascript Response (microversions 1.10 - 1.11) ------------------------------------ .. rest_parameters:: parameters.yaml - allocation_requests: allocation_requests - provider_summaries: provider_summaries - allocations: allocations_array - resource_provider: resource_provider_object - uuid: resource_provider_uuid - resources: resources - capacity: capacity - used: used Response Example (microversions 1.10 - 1.11) -------------------------------------------- .. literalinclude:: ./samples/allocation_candidates/get-allocation_candidates.json :language: javascript .. _`Modeling with Provider Trees`: https://docs.openstack.org/placement/latest/usage/provider-tree.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/allocations.inc0000664000175000017500000001534700000000000023475 0ustar00zuulzuul00000000000000=========== Allocations =========== Allocations are records representing resources that have been assigned and used by some consumer of that resource. 
They indicate the amount of a particular resource that has been allocated to a given consumer of that resource from a particular resource provider. Manage allocations ================== Create, update or delete allocations for multiple consumers in a single request. This allows a client to atomically set or swap allocations for multiple consumers as may be required during a migration or move type operation. The allocations for an individual consumer uuid mentioned in the request can be removed by setting the `allocations` to an empty object (see the example below). **Available as of microversion 1.13.** .. rest_method:: POST /allocations Normal response codes: 204 Error response codes: badRequest(400), conflict(409) * `409 Conflict` if there is no available inventory in any of the resource providers for any specified resource classes. * `409 Conflict` with `error code `_ ``placement.concurrent_update`` if inventories are updated by another request while attempting the operation. See :ref:`generations`. * `409 Conflict` with `error code `_ ``placement.concurrent_update`` at microversion 1.28 or higher if allocations for a specified consumer have been created, updated, or removed by another request while attempting the operation. See :ref:`generations`. Request ------- .. rest_parameters:: parameters.yaml - consumer_uuid: consumer_uuid_body - consumer_generation: consumer_generation_min - consumer_type: consumer_type - project_id: project_id_body - user_id: user_id_body - allocations: allocations_dict_empty - generation: resource_provider_generation_optional - resources: resources - mappings: mappings_in_allocations Request example (microversions 1.38 - ) --------------------------------------- .. literalinclude:: ./samples/allocations/manage-allocations-request-1.38.json :language: javascript Request example (microversions 1.28 - 1.36) ------------------------------------------- .. literalinclude:: ./samples/allocations/manage-allocations-request-1.28.json :language: javascript Request example (microversions 1.13 - 1.27) ------------------------------------------- .. literalinclude:: ./samples/allocations/manage-allocations-request.json :language: javascript Response -------- No body content is returned after a successful request List allocations ================ List all allocation records for the consumer identified by `{consumer_uuid}` on all the resource providers it is consuming. .. note:: When listing allocations for a consumer uuid that has no allocations a dict with an empty value is returned ``{"allocations": {}}``. .. rest_method:: GET /allocations/{consumer_uuid} Normal Response Codes: 200 Request ------- .. rest_parameters:: parameters.yaml - consumer_uuid: consumer_uuid Response -------- .. rest_parameters:: parameters.yaml - allocations: allocations_by_resource_provider - generation: resource_provider_generation - resources: resources - consumer_generation: consumer_generation_get - consumer_type: consumer_type - project_id: project_id_body_1_12 - user_id: user_id_body_1_12 Response Example (1.38 - ) -------------------------- .. literalinclude:: ./samples/allocations/get-allocations-1.38.json :language: javascript Response Example (1.28 - 1.36) ------------------------------ .. literalinclude:: ./samples/allocations/get-allocations-1.28.json :language: javascript Response Example (1.12 - 1.27) ------------------------------ .. 
literalinclude:: ./samples/allocations/get-allocations.json :language: javascript Update allocations ================== Create or update one or more allocation records representing the consumption of one or more classes of resources from one or more resource providers by the consumer identified by `{consumer_uuid}`. If allocations already exist for this consumer, they are replaced. .. rest_method:: PUT /allocations/{consumer_uuid} Normal Response Codes: 204 Error response codes: badRequest(400), itemNotFound(404), conflict(409) * `409 Conflict` if there is no available inventory in any of the resource providers for any specified resource classes. * `409 Conflict` with `error code `_ ``placement.concurrent_update`` if inventories are updated by another request while attempting the operation. See :ref:`generations`. * `409 Conflict` with `error code `_ ``placement.concurrent_update`` at microversion 1.28 or higher if allocations for the specified consumer have been created, updated, or removed by another request while attempting the operation. See :ref:`generations`. Request (microversions 1.12 - ) ------------------------------- .. rest_parameters:: parameters.yaml - consumer_uuid: consumer_uuid - allocations: allocations_dict - resources: resources - consumer_generation: consumer_generation_min - consumer_type: consumer_type - project_id: project_id_body - user_id: user_id_body - generation: resource_provider_generation_optional - mappings: mappings_in_allocations Request example (microversions 1.38 - ) --------------------------------------- .. literalinclude:: ./samples/allocations/update-allocations-request-1.38.json :language: javascript Request example (microversions 1.28 - 1.36) ------------------------------------------- .. literalinclude:: ./samples/allocations/update-allocations-request-1.28.json :language: javascript Request example (microversions 1.12 - 1.27) ------------------------------------------- .. literalinclude:: ./samples/allocations/update-allocations-request-1.12.json :language: javascript Request (microversions 1.0 - 1.11) ---------------------------------- .. rest_parameters:: parameters.yaml - consumer_uuid: consumer_uuid - allocations: allocations_array - resources: resources - resource_provider: resource_provider_object - uuid: resource_provider_uuid - project_id: project_id_body_1_8 - user_id: user_id_body_1_8 Request example (microversions 1.0 - 1.11) ------------------------------------------ .. literalinclude:: ./samples/allocations/update-allocations-request.json :language: javascript Response -------- No body content is returned on a successful PUT. Delete allocations ================== Delete all allocation records for the consumer identified by `{consumer_uuid}` on all resource providers it is consuming. .. rest_method:: DELETE /allocations/{consumer_uuid} Normal Response Codes: 204 Error response codes: itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - consumer_uuid: consumer_uuid Response -------- No body content is returned on a successful DELETE. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/conf.py0000664000175000017500000000526700000000000021771 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # placement-api-ref documentation build configuration file, created by # sphinx-quickstart on Sat May 1 15:17:47 2010. # # This file is execfile()d with the current directory set to # its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import os import sys sys.path.insert(0, os.path.abspath('../')) extensions = [ 'openstackdocstheme', 'os_api_ref', 'ext.validator', ] # -- General configuration ---------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. # The suffix of source filenames. source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. project = 'Placement API Reference' copyright = '2010-present, OpenStack Foundation' # openstackdocstheme options openstackdocs_repo_name = 'openstack/placement' openstackdocs_auto_name = False openstackdocs_use_storyboard = True # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'native' # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. html_theme_options = { "sidebar_mode": "toc", } # -- Options for LaTeX output ------------------------------------------------- # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass # [howto/manual]). latex_documents = [ ('index', 'Placement.tex', 'OpenStack Placement API Documentation', 'OpenStack Foundation', 'manual'), ] # -- Options for openstackdocstheme ------------------------------------------- openstackdocs_projects = [ 'placement', ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/errors.inc0000664000175000017500000000626200000000000022475 0ustar00zuulzuul00000000000000====== Errors ====== When there is an error interacting with the placement API, the response will include a few different signals of what went wrong, include the status header and information in the response body. The structure of the ``JSON`` body of an error response is defined by the OpenStack errors_ guideline. **HTTP Status Code** The ``Status`` header of the response will include a code, defined by :rfc:`7231#section-6` that gives a general overview of the problem. This value also shows up in a ``status`` attribute in the body of the response. **Detail Message** A textual description of the error condition, in a ``detail`` attribute. The value is usually the message associated with whatever exception happened within the service. 
**Error Code** When the microversion is ``>=1.23`` responses will also include a ``code`` attribute in the ``JSON`` body. These are documented below. Where a response does not use a specific code ``placement.undefined_code`` is present. .. note:: In some cases, for example when keystone is being used and no authentication information is provided in a request (causing a ``401`` response), the structure of the error response will not match the above because the error is produced by code other than the placement service. .. _`error_codes`: Error Codes =========== The defined errors are: .. list-table:: :header-rows: 1 * - Code - Meaning * - ``placement.undefined_code`` - The default code used when a specific code has not been defined or is not required. * - ``placement.inventory.inuse`` - An attempt has been made to remove or shrink inventory that has capacity in use. * - ``placement.concurrent_update`` - Another operation has concurrently made a request that involves one or more of the same resources referenced in this request, changing state. The current state should be retrieved to determine if the desired operation should be retried. * - ``placement.duplicate_name`` - A resource of this type already exists with the same name, and duplicate names are not allowed. * - ``placement.resource_provider.inuse`` - An attempt was made to remove a resource provider, but there are allocations against its inventory. * - ``placement.resource_provider.cannot_delete_parent`` - An attempt was made to remove a resource provider, but it has one or more child providers. They must be removed first in order to remove this provider. * - ``placement.resource_provider.not_found`` - A resource provider mentioned in an operation involving multiple resource providers, such as :ref:`reshaper`, does not exist. * - ``placement.query.duplicate_key`` - A request included multiple instances of a query parameter that may only be specified once. * - ``placement.query.bad_value`` - A value in a request conformed to the schema, but failed semantic validation. * - ``placement.query.missing_value`` - A required query parameter is not present in a request. .. _errors: https://specs.openstack.org/openstack/api-wg/guidelines/errors.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/generations.inc0000664000175000017500000000411700000000000023474 0ustar00zuulzuul00000000000000.. _generations: ========================================== Resource Provider and Consumer Generations ========================================== Placement handles concurrent requests against the same entity by maintaining a **generation** for resource providers and consumers. The generation is an opaque value that is updated every time its entity is successfully changed on the server. At appropriate microversions, the generation is returned in responses involving resource providers and/or consumers (allocations), and must be included in requests which make changes to those entities. The server checks to make sure the generation specified in the request matches the internal value. A mismatch indicates that a different request successfully updated that entity in the interim, thereby changing its generation. This will result in an HTTP 409 Conflict response with `error code `_ ``placement.concurrent_update``. 
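For illustration only, a minimal sketch of one such round trip, replacing a provider's inventories, might look like the following. It assumes the third-party ``requests`` library, a placeholder ``PLACEMENT`` endpoint with authentication handled elsewhere, and a placeholder provider uuid; it is not code from the placement project::

    import requests

    PLACEMENT = 'http://placement.example.com'  # hypothetical endpoint
    HEADERS = {'OpenStack-API-Version': 'placement 1.28'}  # plus auth headers
    RP = '4e8e5957-649f-477b-9e5b-f1f75b21c03c'  # placeholder provider uuid

    url = PLACEMENT + '/resource_providers/%s/inventories' % RP

    # Read the provider's inventories; the response body carries the
    # current resource_provider_generation.
    body = requests.get(url, headers=HEADERS).json()

    # Change the desired inventory and write the whole set back,
    # echoing the generation that was just read.
    body['inventories']['VCPU'] = {'total': 8, 'allocation_ratio': 16.0}
    put_resp = requests.put(url, json=body, headers=HEADERS)

    if put_resp.status_code == 409:
        # Another request changed the provider in the interim.
        pass
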
Depending on the usage scenario, an appropriate reaction to such an error may be to re-``GET`` the entity in question, re-evaluate and update as appropriate, and resubmit the request with the new payload. The following pseudocode is a simplistic example of how one might ensure that a trait is set on a resource provider. .. note:: This is not production code. Aside from not being valid syntax for any particular programming language, it deliberately glosses over details and good programming practices such as error checking, retry limits, etc. It is purely for illustrative purposes. :: function _is_concurrent_update(resp) { if(resp.status_code != 409) return False return(resp.json()["errors"][0]["code"] == "placement.concurrent_update") } function ensure_trait_on_provider(provider_uuid, trait) { do { path = "/resource_providers/" + provider_uuid + "/traits" get_resp = placement.GET(path) payload = get_resp.json() if(trait in payload["traits"]) return payload["traits"].append(trait) put_resp = placement.PUT(path, payload) } while _is_concurrent_update(put_resp) } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/index.rst0000664000175000017500000000206700000000000022326 0ustar00zuulzuul00000000000000:tocdepth: 2 =============== Placement API =============== This is a reference for the OpenStack Placement API. To learn more about OpenStack Placement API concepts, please refer to the :placement-doc:`Placement Introduction <>`. The Placement API uses JSON for data exchange. As such, the ``Content-Type`` header for APIs sending data payloads in the request body (i.e. ``PUT`` and ``POST``) must be set to ``application/json`` unless otherwise noted. .. rest_expand_all:: .. include:: request-ids.inc .. include:: errors.inc .. include:: generations.inc .. include:: root.inc .. include:: resource_providers.inc .. include:: resource_provider.inc .. include:: resource_classes.inc .. include:: resource_class.inc .. include:: inventories.inc .. include:: inventory.inc .. include:: aggregates.inc .. include:: traits.inc .. include:: resource_provider_traits.inc .. include:: allocations.inc .. include:: resource_provider_allocations.inc .. include:: usages.inc .. include:: resource_provider_usages.inc .. include:: allocation_candidates.inc .. include:: reshaper.inc ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/inventories.inc0000664000175000017500000000643000000000000023523 0ustar00zuulzuul00000000000000============================= Resource provider inventories ============================= Each resource provider has inventory records for one or more classes of resources. An inventory record contains information about the total and reserved amounts of the resource and any consumption constraints for that resource against the provider. List resource provider inventories ================================== .. rest_method:: GET /resource_providers/{uuid}/inventories Normal Response Codes: 200 Error response codes: itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - uuid: resource_provider_uuid_path Response -------- .. 
rest_parameters:: parameters.yaml - inventories: inventories - resource_provider_generation: resource_provider_generation - allocation_ratio: allocation_ratio - max_unit: max_unit - min_unit: min_unit - reserved: reserved - step_size: step_size - total: total Response Example ---------------- .. literalinclude:: ./samples/inventories/get-inventories.json :language: javascript Update resource provider inventories ==================================== Replaces the set of inventory records for the resource provider identified by `{uuid}`. .. rest_method:: PUT /resource_providers/{uuid}/inventories Normal Response Codes: 200 Error response codes: badRequest(400), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - uuid: resource_provider_uuid_path - resource_provider_generation: resource_provider_generation - inventories: inventories - total: total - allocation_ratio: allocation_ratio_opt - max_unit: max_unit_opt - min_unit: min_unit_opt - reserved: reserved_opt - step_size: step_size_opt Request example --------------- .. literalinclude:: ./samples/inventories/update-inventories-request.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - resource_provider_generation: resource_provider_generation - inventories: inventories - allocation_ratio: allocation_ratio - max_unit: max_unit - min_unit: min_unit - reserved: reserved - step_size: step_size - total: total Response Example ---------------- .. literalinclude:: ./samples/inventories/update-inventories.json :language: javascript Delete resource provider inventories ==================================== Deletes all inventory records for the resource provider identified by `{uuid}`. **Troubleshooting** The request returns an HTTP 409 when there are allocations against the provider or if the provider's inventory is updated by another thread while attempting the operation. .. note:: Method is available starting from version 1.5. .. rest_method:: DELETE /resource_providers/{uuid}/inventories Normal Response Codes: 204 Error response codes: itemNotFound(404), conflict(409) .. note:: Since this request does not accept the resource provider generation, it is not safe to use when multiple threads are managing inventories for a single provider. In such situations, use the ``PUT /resource_providers/{uuid}/inventories`` API with an empty ``inventories`` dict. Request ------- .. rest_parameters:: parameters.yaml - uuid: resource_provider_uuid_path Response -------- No body content is returned on a successful DELETE. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/inventory.inc0000664000175000017500000000614600000000000023217 0ustar00zuulzuul00000000000000=========================== Resource provider inventory =========================== See `Resource provider inventories`_ for a description. This group of API calls works with a single inventory identified by ``resource_class``. One inventory can be listed, created, updated and deleted per each call. Show resource provider inventory ================================ .. rest_method:: GET /resource_providers/{uuid}/inventories/{resource_class} Normal Response Codes: 200 Error response codes: itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - uuid: resource_provider_uuid_path - resource_class: resource_class_path Response -------- .. 
rest_parameters:: parameters.yaml - resource_provider_generation: resource_provider_generation - allocation_ratio: allocation_ratio - max_unit: max_unit - min_unit: min_unit - reserved: reserved - step_size: step_size - total: total Response Example ---------------- .. literalinclude:: ./samples/inventories/get-inventory.json :language: javascript Update resource provider inventory ================================== Replace the inventory record of the `{resource_class}` for the resource provider identified by `{uuid}`. .. rest_method:: PUT /resource_providers/{uuid}/inventories/{resource_class} Normal Response Codes: 200 Error response codes: badRequest(400), itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - uuid: resource_provider_uuid_path - resource_class: resource_class_path - resource_provider_generation: resource_provider_generation - total: total - allocation_ratio: allocation_ratio_opt - max_unit: max_unit_opt - min_unit: min_unit_opt - reserved: reserved_opt - step_size: step_size_opt Request example --------------- .. literalinclude:: ./samples/inventories/update-inventory-request.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - resource_provider_generation: resource_provider_generation - allocation_ratio: allocation_ratio - max_unit: max_unit - min_unit: min_unit - reserved: reserved - step_size: step_size - total: total Response Example ---------------- .. literalinclude:: ./samples/inventories/update-inventory.json :language: javascript Delete resource provider inventory ================================== Delete the inventory record of the `{resource_class}` for the resource provider identified by `{uuid}`. See `Troubleshooting`_ section in ``Delete resource provider inventories`` for a description. In addition, the request returns HTTP 409 when there are allocations for the specified resource provider and resource class. .. _Troubleshooting: ?expanded=delete-resource-provider-inventories-detail#delete-resource-provider-inventories .. rest_method:: DELETE /resource_providers/{uuid}/inventories/{resource_class} Normal Response Codes: 204 Error response codes: itemNotFound(404), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - uuid: resource_provider_uuid_path - resource_class: resource_class_path Response -------- No body content is returned on a successful DELETE. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/parameters.yaml0000664000175000017500000007520600000000000023521 0ustar00zuulzuul00000000000000# variables in header location: description: | The location URL of the resource created, HTTP header "Location: " will be returned. in: header required: true type: string # variables in path consumer_uuid: &consumer_uuid type: string in: path required: true description: > The uuid of a consumer. resource_class_path: &resource_class_path type: string in: path required: true description: > The name of one resource class. resource_class_path_custom: &resource_class_path_custom type: string in: path required: true description: > The name of one resource class. The name must start with the prefix ``CUSTOM_``. If not, the request returns a ``Bad Request (400)`` response code. resource_provider_uuid_path: &resource_provider_uuid_path type: string in: path required: true description: > The uuid of a resource provider. 
trait_name: type: string in: path required: true description: > The name of a trait. # variables in query allocation_candidates_group_policy: type: string in: query required: false min_version: 1.25 description: > When more than one ``resourcesN`` query parameter is supplied, ``group_policy`` is required to indicate how the groups should interact. With ``group_policy=none``, separate groupings - with or without a suffix - may or may not be satisfied by the same provider. With ``group_policy=isolate``, suffixed groups are guaranteed to be satisfied by *different* providers - though there may still be overlap with the suffixless group. allocation_candidates_in_tree: &allocation_candidates_in_tree type: string in: query required: false description: > A string representing a resource provider uuid. When supplied, it will filter the returned allocation candidates to only those resource providers that are in the same tree with the given resource provider. min_version: 1.31 allocation_candidates_in_tree_granular: <<: *allocation_candidates_in_tree description: > A string representing a resource provider uuid. The parameter key is ``in_treeN``, where ``N`` represents a suffix corresponding with a ``resourcesN`` parameter. When supplied, it will filter the returned allocation candidates for that suffixed group to only those resource providers that are in the same tree with the given resource provider. **In microversions 1.25 - 1.32** the suffix is a number. **Starting from microversion 1.33** the suffix is a string that may be 1-64 characters long and consist of numbers, ``a-z``, ``A-Z``, ``-``, and ``_``. allocation_candidates_limit: type: integer in: query required: false min_version: 1.16 description: > A positive integer used to limit the maximum number of allocation candidates returned in the response. allocation_candidates_member_of: type: string in: query required: false description: > A string representing an aggregate uuid; or the prefix ``in:`` followed by a comma-separated list of strings representing aggregate uuids. The resource providers in the allocation request in the response must directly or via the root provider be associated with the aggregate or aggregates identified by uuid:: member_of=5e08ea53-c4c6-448e-9334-ac4953de3cfa member_of=in:42896e0d-205d-4fe3-bd1e-100924931787,5e08ea53-c4c6-448e-9334-ac4953de3cfa **Starting from microversion 1.24** specifying multiple ``member_of`` query string parameters is possible. Multiple ``member_of`` parameters will result in filtering providers that are directly or via root provider associated with aggregates listed in all of the ``member_of`` query string values. For example, to get the providers that are associated with aggregate A as well as associated with any of aggregates B or C, the user could issue the following query:: member_of=AGGA_UUID&member_of=in:AGGB_UUID,AGGC_UUID **Starting from microversion 1.32** specifying forbidden aggregates is supported in the ``member_of`` query string parameter. Forbidden aggregates are prefixed with a ``!``. This negative expression can also be used in multiple ``member_of`` parameters:: member_of=AGGA_UUID&member_of=!AGGB_UUID would translate logically to "Candidate resource providers must be in AGGA and *not* in AGGB." We do NOT support ``!`` on the values within ``in:``, but we support ``!in:``. 
Both of the following two example queries return candidate resource providers that are NOT in AGGA, AGGB, or AGGC:: member_of=!in:AGGA_UUID,AGGB_UUID,AGGC_UUID member_of=!AGGA_UUID&member_of=!AGGB_UUID&member_of=!AGGC_UUID We do not check if the same aggregate uuid is in both positive and negative expression to return 400 BadRequest. We still return 200 for such cases. For example:: member_of=AGGA_UUID&member_of=!AGGA_UUID would return empty ``allocation_requests`` and ``provider_summaries``, while:: member_of=in:AGGA_UUID,AGGB_UUID&member_of=!AGGA_UUID would return resource providers that are NOT in AGGA but in AGGB. min_version: 1.21 allocation_candidates_member_of_granular: type: string in: query required: false description: > A string representing an aggregate uuid; or the prefix ``in:`` followed by a comma-separated list of strings representing aggregate uuids. The returned resource providers must directly be associated with at least one of the aggregates identified by uuid. **Starting from microversion 1.32** specifying forbidden aggregates is supported. Forbidden aggregates are expressed with a ``!`` prefix; or the prefix ``!in:`` followed by a comma-separated list of strings representing aggregate uuids. The returned resource providers must not directly be associated with any of the aggregates identified by uuid. The parameter key is ``member_ofN``, where ``N`` represents a suffix corresponding with a ``resourcesN`` parameter. The value format is the same as for the (not granular) ``member_of`` parameter; but all of the resources and traits specified in a granular grouping will always be satisfied by the same resource provider. **In microversions 1.25 - 1.32** the suffix is a number. **Starting from microversion 1.33** the suffix is a string that may be 1-64 characters long and consist of numbers, ``a-z``, ``A-Z``, ``-``, and ``_``. Separate groupings - with or without a suffix - may or may not be satisfied by the same provider, depending on the value of the ``group_policy`` parameter. It is an error to specify a ``member_ofN`` parameter without a corresponding ``resourcesN`` parameter with the same suffix. min_version: 1.25 allocation_candidates_root_required: type: string in: query required: false min_version: 1.35 description: | A comma-separated list of trait requirements that the root provider of the (non-sharing) tree must satisfy:: root_required=COMPUTE_SUPPORTS_MULTI_ATTACH,!CUSTOM_WINDOWS_LICENSED Allocation requests in the response will be limited to those whose (non-sharing) tree's root provider satisfies the specified trait requirements. Traits which are forbidden (must **not** be present on the root provider) are expressed by prefixing the trait with a ``!``. allocation_candidates_same_subtree: type: string in: query required: false min_version: 1.36 description: | A comma-separated list of request group suffix strings ($S). Each must exactly match a suffix on a granular group somewhere else in the request. Importantly, the identified request groups need not have a resources[$S]. If this is provided, at least one of the resource providers satisfying a specified request group must be an ancestor of the rest. The ``same_subtree`` query parameter can be repeated and each repeat group is treated independently. consumer_type_req: type: string in: query required: false min_version: 1.38 description: | A string that consists of numbers, ``A-Z``, and ``_`` describing the consumer type by which to filter usage results. 
For example, to retrieve only usage information for 'INSTANCE' type consumers a parameter of ``consumer_type=INSTANCE`` should be provided. The ``all`` query parameter may be specified to group all results under one key, ``all``. The ``unknown`` query parameter may be specified to group all results under one key, ``unknown``. project_id: &project_id type: string in: query required: true description: > The uuid of a project. required_traits_granular: type: string in: query required: false description: | A comma-separated list of traits that a provider must have, or (if prefixed with a ``!``) **not** have:: required42=HW_CPU_X86_AVX,HW_CPU_X86_SSE,!HW_CPU_X86_AVX2 The parameter key is ``requiredN``, where ``N`` represents a suffix corresponding with a ``resourcesN`` parameter. The value format is the same as for the (not granular) ``required`` parameter; but all of the resources and traits specified in a suffixed grouping will always be satisfied by the same resource provider. Separate groupings - with or without a suffix - may or may not be satisfied by the same provider, depending on the value of the ``group_policy`` parameter. **In microversions 1.25 - 1.32** the suffix is a number. **Starting from microversion 1.33** the suffix is a string that may be 1-64 characters long and consist of numbers, ``a-z``, ``A-Z``, ``-``, and ``_``. It is an error to specify a ``requiredN`` parameter without a corresponding ``resourcesN`` parameter with the same suffix. **Starting from microversion 1.39** the granular ``requiredN`` query parameter gained support for the ``in:`` syntax as well as the repetition of the parameter. So:: requiredN=in:T3,T4&requiredN=T1,!T2 is supported and it means T1 and not T2 and (T3 or T4). min_version: 1.25 required_traits_unnumbered: type: string in: query required: false min_version: 1.17 description: | A comma-separated list of traits that a provider must have:: required=HW_CPU_X86_AVX,HW_CPU_X86_SSE Allocation requests in the response will be for resource providers that have capacity for all requested resources and the set of those resource providers will *collectively* contain all of the required traits. These traits may be satisfied by any provider in the same non-sharing tree or associated via aggregate as far as that provider also contributes resource to the request. **Starting from microversion 1.22** traits which are forbidden from any resource provider contributing resources to the request may be expressed by prefixing a trait with a ``!``. **Starting from microversion 1.39** the ``required`` query parameter can be repeated. The trait lists from the repeated parameters are ANDed together. So:: required=T1,!T2&required=T3 means T1 and not T2 and T3. Also **starting from microversion 1.39** the ``required`` parameter supports the syntax:: required=in:T1,T2,T3 which means T1 or T2 or T3. Mixing forbidden traits into an ``in:`` prefixed value is not supported and rejected. But mixing a normal trait list and an ``in:`` prefixed trait list in two query params within the same request is supported. So:: required=in:T3,T4&required=T1,!T2 is supported and it means T1 and not T2 and (T3 or T4). resource_provider_member_of: type: string in: query required: false description: > A string representing an aggregate uuid; or the prefix ``in:`` followed by a comma-separated list of strings representing aggregate uuids. 
The returned resource providers must directly be associated with at least one of the aggregates identified by uuid:: member_of=5e08ea53-c4c6-448e-9334-ac4953de3cfa member_of=in:42896e0d-205d-4fe3-bd1e-100924931787,5e08ea53-c4c6-448e-9334-ac4953de3cfa **Starting from microversion 1.24** specifying multiple ``member_of`` query string parameters is possible. Multiple ``member_of`` parameters will result in filtering providers that are associated with aggregates listed in all of the ``member_of`` query string values. For example, to get the providers that are associated with aggregate A as well as associated with any of aggregates B or C, the user could issue the following query:: member_of=AGGA_UUID&member_of=in:AGGB_UUID,AGGC_UUID **Starting from microversion 1.32** specifying forbidden aggregates is supported in the ``member_of`` query string parameter. Forbidden aggregates are prefixed with a ``!``. This negative expression can also be used in multiple ``member_of`` parameters:: member_of=AGGA_UUID&member_of=!AGGB_UUID would translate logically to "Candidate resource providers must be in AGGA and *not* in AGGB." We do NOT support ``!`` on the values within ``in:``, but we support ``!in:``. Both of the following two example queries return candidate resource providers that are NOT in AGGA, AGGB, or AGGC:: member_of=!in:AGGA_UUID,AGGB_UUID,AGGC_UUID member_of=!AGGA_UUID&member_of=!AGGB_UUID&member_of=!AGGC_UUID We do not check if the same aggregate uuid is in both positive and negative expression to return 400 BadRequest. We still return 200 for such cases. For example:: member_of=AGGA_UUID&member_of=!AGGA_UUID would return an empty list for ``resource_providers``, while:: member_of=in:AGGA_UUID,AGGB_UUID&member_of=!AGGA_UUID would return resource providers that are NOT in AGGA but in AGGB. min_version: 1.3 resource_provider_name_query: type: string in: query required: false description: > The name of a resource provider to filter the list. resource_provider_required_query: type: string in: query required: false description: | A comma-delimited list of string trait names. Results will be filtered to include only resource providers having all the specified traits. **Starting from microversion 1.22** traits which are forbidden from any resource provider may be expressed by prefixing a trait with a ``!``. **Starting from microversion 1.39** the ``required`` query parameter can be repeated. The trait lists from the repeated parameters are ANDed together. So:: required=T1,!T2&required=T3 means T1 and not T2 and T3. Also **starting from microversion 1.39** the ``required`` parameter supports the syntax:: required=in:T1,T2,T3 which means T1 or T2 or T3. Mixing forbidden traits into an ``in:`` prefixed value is not supported and rejected. But mixing normal trait list and ``in:`` trait list in two query params within the same request is supported. So:: required=in:T3,T4&required=T1,!T2 is supported and it means T1 and not T2 and (T3 or T4). min_version: 1.18 resource_provider_tree_query: type: string in: query required: false description: > A UUID of a resource provider. The returned resource providers will be in the same "provider tree" as the specified provider. 
min_version: 1.14 resource_provider_uuid_query: <<: *resource_provider_uuid_path in: query required: false resources_query_1_4: type: string in: query required: false description: | A comma-separated list of strings indicating an amount of resource of a specified class that a provider must have the capacity and availability to serve:: resources=VCPU:4,DISK_GB:64,MEMORY_MB:2048 Note that the amount must be an integer greater than 0. min_version: 1.4 resources_query_ac: type: string in: query required: false description: | A comma-separated list of strings indicating an amount of resource of a specified class that providers in each allocation request must *collectively* have the capacity and availability to serve:: resources=VCPU:4,DISK_GB:64,MEMORY_MB:2048 These resources may be satisfied by any provider in the same non-sharing tree or associated via aggregate. resources_query_granular: type: string in: query required: false description: | A comma-separated list of strings indicating an amount of resource of a specified class that a provider must have the capacity to serve:: resources42=VCPU:4,DISK_GB:64,MEMORY_MB:2048 The parameter key is ``resourcesN``, where ``N`` represents a unique suffix. The value format is the same as for the (not granular) ``resources`` parameter, but the resources specified in a ``resourcesN`` parameter will always be satisfied by a single provider. **In microversions 1.25 - 1.32** the suffix is a number. **Starting from microversion 1.33** the suffix is a string that may be 1-64 characters long and consist of numbers, ``a-z``, ``A-Z``, ``-``, and ``_``. Separate groupings - with or without a suffix - may or may not be satisfied by the same provider depending on the value of the ``group_policy`` parameter. min_version: 1.25 trait_associated: type: string in: query required: false description: > If this parameter has a true value, the returned traits will be those that are associated with at least one resource provider. Available values for the parameter are true and false. trait_name_query: type: string in: query required: false description: | A string to filter traits. The following options are available: `startswith` operator filters the traits whose name begins with a specific prefix, e.g. name=startswith:CUSTOM, `in` operator filters the traits whose name is in the specified list, e.g. name=in:HW_CPU_X86_AVX,HW_CPU_X86_SSE,HW_CPU_X86_INVALID_FEATURE. user_id: &user_id type: string in: query required: false description: > The uuid of a user. # variables in body aggregates: type: array in: body required: true description: > A list of aggregate uuids. Previously nonexistent aggregates are created automatically. allocation_ratio: &allocation_ratio type: float in: body required: true description: | It is used in determining whether consumption of the resource of the provider can exceed physical constraints. For example, for a vCPU resource with:: allocation_ratio = 16.0 total = 8 Overall capacity is equal to 128 vCPUs. allocation_ratio_opt: <<: *allocation_ratio required: false allocation_requests: type: array in: body required: true description: > A list of objects that contain a serialized HTTP body that a client may subsequently use in a call to PUT /allocations/{consumer_uuid} to claim resources against a related set of resource providers. allocations_array: type: array in: body required: true description: > A list of dictionaries. 
allocations_by_resource_provider: type: object in: body required: true description: > A dictionary of allocations keyed by resource provider uuid. allocations_dict: &allocations_dict type: object in: body required: true description: > A dictionary of resource allocations keyed by resource provider uuid. allocations_dict_empty: <<: *allocations_dict description: > A dictionary of resource allocations keyed by resource provider uuid. If this is an empty object, allocations for this consumer will be removed. min_version: null capacity: type: integer in: body required: true description: > The amount of the resource that the provider can accommodate. consumer_count: type: integer in: body required: true min_version: 1.38 description: > The number of consumers of a particular ``consumer_type``. consumer_generation: &consumer_generation type: integer in: body required: true description: > The generation of the consumer. Should be set to ``null`` when indicating that the caller expects the consumer does not yet exist. consumer_generation_get: <<: *consumer_generation description: > The generation of the consumer. Will be absent when listing allocations for a consumer uuid that has no allocations. min_version: 1.28 consumer_generation_min: <<: *consumer_generation min_version: 1.28 consumer_type: type: string in: body required: true min_version: 1.38 description: > A string that consists of numbers, ``A-Z``, and ``_`` describing what kind of consumer is creating, or has created, allocations using a quantity of inventory. The string is determined by the client when writing allocations and it is up to the client to ensure correct choices amongst collaborating services. For example, the compute service may choose to type some consumers 'INSTANCE' and others 'MIGRATION'. consumer_uuid_body: <<: *consumer_uuid in: body inventories: type: object in: body required: true description: > A dictionary of inventories keyed by resource classes. mappings: &mappings type: object in: body required: true description: > A dictionary associating request group suffixes with a list of uuids identifying the resource providers that satisfied each group. The empty string and ``[a-zA-Z0-9_-]+`` are valid suffixes. This field may be sent when writing allocations back to the server but will be ignored; this preserves symmetry between read and write representations. min_version: 1.34 mappings_in_allocations: <<: *mappings required: false max_unit: &max_unit type: integer in: body required: true description: > A maximum amount any single allocation against an inventory can have. max_unit_opt: <<: *max_unit required: false min_unit: &min_unit type: integer in: body required: true description: > A minimum amount any single allocation against an inventory can have. min_unit_opt: <<: *min_unit required: false project_id_body: &project_id_body <<: *project_id in: body project_id_body_1_12: <<: *project_id_body description: > The uuid of a project. Will be absent when listing allocations for a consumer uuid that has no allocations. min_version: 1.12 project_id_body_1_8: <<: *project_id_body min_version: 1.8 provider_summaries: type: object in: body required: true description: > A dictionary keyed by resource provider UUID included in the ``allocation_requests``, of dictionaries of inventory/capacity information. provider_summaries_1_12: type: object in: body required: true description: > A dictionary keyed by resource provider UUID included in the ``allocation_requests``, of dictionaries of inventory/capacity information. 
The list of traits the resource provider has associated with it is included in version 1.17 and above. Starting from microversion 1.29, the provider summaries include all resource providers in the same resource provider tree that has one or more resource providers included in the ``allocation_requests``. reserved: &reserved type: integer in: body required: true description: > The amount of the resource a provider has reserved for its own use. reserved_opt: <<: *reserved required: false description: > The amount of the resource a provider has reserved for its own use. Up to microversion 1.25, this value has to be less than the value of ``total``. Starting from microversion 1.26, this value has to be less than or equal to the value of ``total``. reshaper_allocations: type: object in: body required: true description: > A dictionary of multiple allocations, keyed by consumer uuid. Each collection of allocations describes the full set of allocations for each consumer. Each consumer allocations dict is itself a dictionary of resource allocations keyed by resource provider uuid. An empty dictionary indicates no change in existing allocations, whereas an empty ``allocations`` dictionary **within** a consumer dictionary indicates that all allocations for that consumer should be deleted. reshaper_inventories: type: object in: body required: true description: > A dictionary of multiple inventories, keyed by resource provider uuid. Each inventory describes the desired full inventory for each resource provider. An empty dictionary causes the inventory for that provider to be deleted. resource_class: <<: *resource_class_path in: body resource_class_custom: <<: *resource_class_path_custom in: body resource_class_links: type: array in: body required: true description: > A list of links associated with one resource class. resource_classes: type: array in: body required: true description: > A list of ``resource_class`` objects. resource_provider_allocations: type: object in: body required: true description: > A dictionary of allocation records keyed by consumer uuid. resource_provider_generation: &resource_provider_generation type: integer in: body required: true description: > A consistent view marker that assists with the management of concurrent resource provider updates. resource_provider_generation_optional: <<: *resource_provider_generation required: false description: > A consistent view marker that assists with the management of concurrent resource provider updates. The value is ignored; it is present to preserve symmetry between read and write representations. resource_provider_generation_v1_19: <<: *resource_provider_generation min_version: 1.19 resource_provider_links: &resource_provider_links type: array in: body required: true description: | A list of links associated with one resource provider. .. note:: Aggregates relationship link is available starting from version 1.1. Traits relationship link is available starting from version 1.6. Allocations relationship link is available starting from version 1.11. resource_provider_links_v1_20: <<: *resource_provider_links description: | A list of links associated with the resource provider. resource_provider_name: type: string in: body required: true description: > The name of one resource provider. resource_provider_object: type: object in: body required: true description: > A dictionary which contains the UUID of the resource provider. 
resource_provider_parent_provider_uuid_request: type: string in: body required: false description: | The UUID of the immediate parent of the resource provider. * Before version ``1.37``, once set, the parent of a resource provider cannot be changed. * Since version ``1.37``, it can be set to any existing provider UUID, except for providers that would cause a loop. It can also be set to null to transform the provider into a new root provider. This operation needs to be used carefully. Moving providers can mean that the original rules used to create the existing resource allocations may be invalidated by that move. min_version: 1.14 resource_provider_parent_provider_uuid_required_no_min: type: string in: body required: true description: > The UUID of the immediate parent of the resource provider. resource_provider_parent_provider_uuid_response_1_14: type: string in: body required: true description: > The UUID of the immediate parent of the resource provider. min_version: 1.14 resource_provider_parent_provider_uuid_response_1_29: type: string in: body required: true description: > The UUID of the immediate parent of the resource provider. min_version: 1.29 resource_provider_root_provider_uuid_1_29: type: string in: body required: true description: > UUID of the top-most provider in this provider tree. min_version: 1.29 resource_provider_root_provider_uuid_no_min: &resource_provider_root_provider_uuid_no_min type: string in: body required: true description: > UUID of the top-most provider in this provider tree. resource_provider_root_provider_uuid_required: <<: *resource_provider_root_provider_uuid_no_min description: > Read-only UUID of the top-most provider in this provider tree. min_version: 1.14 resource_provider_usages: type: object in: body required: true description: > The usage summary of the resource provider. This is a dictionary that describes how much each class of resource is being consumed on this resource provider. For example, ``"VCPU": 1`` means 1 VCPU is used. resource_provider_uuid: <<: *resource_provider_uuid_path in: body resource_provider_uuid_opt: <<: *resource_provider_uuid_path in: body required: false resource_providers: type: array in: body required: true description: > A list of ``resource_provider`` objects. resources: type: object in: body required: true description: > A dictionary of resource records keyed by resource class name. resources_single: type: integer in: body required: true description: > An amount of resource class consumed in a usage report. step_size: &step_size type: integer in: body required: true description: > A representation of the divisible amount of the resource that may be requested. For example, step_size = 5 means that only values divisible by 5 (5, 10, 15, etc.) can be requested. step_size_opt: <<: *step_size required: false total: type: integer in: body required: true description: > The actual amount of the resource that the provider can accommodate. traits: &traits type: array in: body required: true description: > A list of traits. traits_1_17: <<: *traits min_version: 1.17 used: type: integer in: body required: true description: > The amount of the resource that has already been allocated. user_id_body: &user_id_body <<: *user_id in: body required: true user_id_body_1_12: <<: *user_id_body description: > The uuid of a user. Will be absent when listing allocations for a consumer uuid that has no allocations. 
min_version: 1.12 user_id_body_1_8: <<: *user_id_body min_version: 1.8 version_id: type: string in: body required: true description: > A common name for the version being described. Informative only. version_links: type: array in: body required: true description: > A list of links related to and describing this version. version_max: type: string in: body required: true description: > The maximum microversion that is supported. version_min: type: string in: body required: true description: > The minimum microversion that is supported. version_status: type: string in: body required: true description: > The status of the version being described. With placement this is "CURRENT". versions: type: array in: body required: true description: > A list of version objects that describe the API versions available. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/request-ids.inc0000664000175000017500000000464600000000000023426 0ustar00zuulzuul00000000000000=========== Request IDs =========== All logs on the system, by default, include the global request ID and the local request ID when available. The local request ID is a unique ID locally generated by each service, and thus different for each service (Nova, Cinder, Glance, Neutron, etc.) involved in that operation. The format is ``req-`` + UUID (UUID4). The global request ID is a user-specified request ID which is utilized as a common identifier by all services. This request ID is the same among all services involved in that operation. The format is ``req-`` + UUID (UUID4). This allows an administrator to track the API request processing as it transitions between all the different nova services or between nova and other component services called by Nova during that request. Even if a global request ID is specified in a request, users always receive a local request ID in the response. For more details about request IDs, please reference: `Faults `_ (It is *not* about the Placement API, but there are some common points.) **Request** .. NOTE(takashin): The 'rest_parameters' directive needs the 'rest_method' directive before itself. But this file does not contain the 'rest_method' directive. So the 'rest_parameters' directive is not used. .. list-table:: :widths: 20 10 10 60 :header-rows: 1 * - Name - In - Type - Description * - X-Openstack-Request-Id (Optional) - header - string - The global request ID, which is a unique common ID for tracking each request in OpenStack components. The format of the global request ID must be ``req-`` + UUID (UUID4). If it does not conform to the format, it is ignored. It is associated with the request and appears in the log lines for that request. By default, the middleware configuration ensures that the global request ID appears in the log files. **Response** .. list-table:: :widths: 20 10 10 60 :header-rows: 1 * - Name - In - Type - Description * - X-Openstack-Request-Id - header - string - The local request ID, which is a unique ID generated automatically for tracking each request to placement. It is associated with the request and appears in the log lines for that request. By default, the middleware configuration ensures that the local request ID appears in the log files. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/reshaper.inc0000664000175000017500000000317200000000000022767 0ustar00zuulzuul00000000000000 .. 
_reshaper: ======== Reshaper ======== .. note:: Reshaper requests are available starting from version 1.30. Reshaper ======== Atomically migrate resource provider inventories and associated allocations. This is used when some of the inventory needs to move from one resource provider to another, such as when a class of inventory moves from a parent provider to a new child provider. .. note:: This is a special operation that should only be used in the rare case that a resource provider topology has to change while inventory is in use. Only use this if you are really sure of what you are doing. .. rest_method:: POST /reshaper Normal Response Codes: 204 Error Response Codes: badRequest(400), conflict(409) Request ------- .. rest_parameters:: parameters.yaml - inventories: reshaper_inventories - inventories.{resource_provider_uuid}.resource_provider_generation: resource_provider_generation - inventories.{resource_provider_uuid}.inventories: inventories - allocations: reshaper_allocations - allocations.{consumer_uuid}.allocations: allocations_dict_empty - allocations.{consumer_uuid}.allocations.{resource_provider_uuid}.resources: resources - allocations.{consumer_uuid}.project_id: project_id_body - allocations.{consumer_uuid}.user_id: user_id_body - allocations.{consumer_uuid}.mappings: mappings - allocations.{consumer_uuid}.consumer_generation: consumer_generation - allocations.{consumer_uuid}.consumer_type: consumer_type Request Example --------------- .. literalinclude:: ./samples/reshaper/post-reshaper-1.38.json :language: javascript No body content is returned on a successful POST. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/resource_class.inc0000664000175000017500000000620200000000000024167 0ustar00zuulzuul00000000000000============== Resource Class ============== See `resource classes`_ for a description. This group of API calls works with a single resource class identified by `name`. A single resource class can be retrieved, updated and deleted. .. note:: Resource class API calls are available starting from version 1.2. Show resource class =================== .. rest_method:: GET /resource_classes/{name} Return a representation of the resource class identified by `{name}`. Normal Response Codes: 200 Error response codes: itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - name: resource_class_path Response -------- .. rest_parameters:: parameters.yaml - name: resource_class - links: resource_class_links Response Example ---------------- .. literalinclude:: ./samples/resource_classes/get-resource_class.json :language: javascript Update resource class ===================== .. rest_method:: PUT /resource_classes/{name} Create or validate the existence of a single resource class identified by `{name}`. .. note:: This method is available starting from version 1.7. Normal Response Codes: 201, 204 A `201 Created` response code will be returned if the new resource class is successfully created. A `204 No Content` response code will be returned if the resource class already exists. Error response codes: badRequest(400) Request ------- .. rest_parameters:: parameters.yaml - name: resource_class_path_custom Response -------- .. rest_parameters:: parameters.yaml - Location: location No body content is returned on a successful PUT. Update resource class (microversions 1.2 - 1.6) =============================================== .. 
warning:: Changing resource class names using the <1.7 microversion is strongly discouraged. .. rest_method:: PUT /resource_classes/{name} Update the name of the resource class identified by `{name}`. Normal Response Codes: 200 Error response codes: badRequest(400), itemNotFound(404), conflict(409) A `409 Conflict` response code will be returned if another resource class exists with the provided name. Request ------- .. rest_parameters:: parameters.yaml - name: resource_class_path - name: resource_class_custom Request example --------------- .. literalinclude:: ./samples/resource_classes/update-resource_class-request.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - name: resource_class - links: resource_class_links Response Example ---------------- .. literalinclude:: ./samples/resource_classes/update-resource_class.json :language: javascript Delete resource class ===================== .. rest_method:: DELETE /resource_classes/{name} Delete the resource class identified by `{name}`. Normal Response Codes: 204 Error response codes: badRequest(400), itemNotFound(404), conflict(409) A `400 BadRequest` response code will be returned if trying to delete a standard resource class. A `409 Conflict` response code will be returned if there exist inventories for the resource class. Request ------- .. rest_parameters:: parameters.yaml - name: resource_class_path Response -------- No body content is returned on a successful DELETE. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/resource_classes.inc0000664000175000017500000000307100000000000024520 0ustar00zuulzuul00000000000000================ Resource Classes ================ Resource classes are entities that indicate standard or deployer-specific resources that can be provided by a resource provider. .. note:: Resource class API calls are available starting from version 1.2. List resource classes ===================== .. rest_method:: GET /resource_classes Return a list of all resource classes. Normal Response Codes: 200 Response -------- .. rest_parameters:: parameters.yaml - resource_classes: resource_classes - links: resource_class_links - name: resource_class Response Example ---------------- .. literalinclude:: ./samples/resource_classes/get-resource_classes.json :language: javascript Create resource class ===================== .. rest_method:: POST /resource_classes Create a new resource class. The new class must be a *custom* resource class, prefixed with `CUSTOM_` and distinct from the standard resource classes. Normal Response Codes: 201 Error response codes: badRequest(400), conflict(409) A `400 BadRequest` response code will be returned if the resource class does not have prefix `CUSTOM_`. A `409 Conflict` response code will be returned if another resource class exists with the provided name. Request ------- .. rest_parameters:: parameters.yaml - name: resource_class_custom Request example --------------- .. literalinclude:: ./samples/resource_classes/create-resource_classes-request.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - Location: location No body content is returned on a successful POST. 
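A minimal sketch (not one of the shipped samples) of how a client might create a custom resource class with the Python ``requests`` library; the endpoint URL, token value, and chosen microversion below are assumptions that depend on the deployment.

.. code-block:: python

    # Minimal sketch: create a custom resource class via the Placement API.
    # PLACEMENT and TOKEN are placeholders; a real client would discover the
    # endpoint and obtain the token from Keystone.
    import requests

    PLACEMENT = "http://controller/placement"  # assumed endpoint
    TOKEN = "..."                               # assumed Keystone token

    resp = requests.post(
        PLACEMENT + "/resource_classes",
        headers={
            "X-Auth-Token": TOKEN,
            "OpenStack-API-Version": "placement 1.2",
        },
        json={"name": "CUSTOM_FPGA"},
    )
    # A 201 Created response carries the URL of the new class in the Location
    # header; 409 Conflict means the name is already in use.
    print(resp.status_code, resp.headers.get("Location"))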
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/resource_provider.inc0000664000175000017500000000577200000000000024723 0ustar00zuulzuul00000000000000================= Resource Provider ================= See `Resource providers`_ for a description. This group of API calls works with a single resource provider identified by `uuid`. A single resource provider can be retrieved, updated and deleted. Show resource provider ====================== .. rest_method:: GET /resource_providers/{uuid} Return a representation of the resource provider identified by `{uuid}`. Normal Response Codes: 200 Error response codes: itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - uuid: resource_provider_uuid_path Response -------- .. rest_parameters:: parameters.yaml - generation: resource_provider_generation - uuid: resource_provider_uuid - links: resource_provider_links - name: resource_provider_name - parent_provider_uuid: resource_provider_parent_provider_uuid_response_1_14 - root_provider_uuid: resource_provider_root_provider_uuid_required Response Example ---------------- .. literalinclude:: ./samples/resource_providers/get-resource_provider.json :language: javascript Update resource provider ======================== .. rest_method:: PUT /resource_providers/{uuid} Update the name of the resource provider identified by `{uuid}`. Normal Response Codes: 200 Error response codes: badRequest(400), itemNotFound(404), conflict(409) A `409 Conflict` response code will be returned if another resource provider exists with the provided name. Request ------- .. rest_parameters:: parameters.yaml - uuid: resource_provider_uuid_path - name: resource_provider_name - parent_provider_uuid: resource_provider_parent_provider_uuid_request Request example --------------- .. literalinclude:: ./samples/resource_providers/update-resource_provider-request.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - generation: resource_provider_generation - uuid: resource_provider_uuid - links: resource_provider_links - name: resource_provider_name - parent_provider_uuid: resource_provider_parent_provider_uuid_response_1_14 - root_provider_uuid: resource_provider_root_provider_uuid_required Response Example ---------------- .. literalinclude:: ./samples/resource_providers/update-resource_provider.json :language: javascript Delete resource provider ======================== .. rest_method:: DELETE /resource_providers/{uuid} Delete the resource provider identified by `{uuid}`. This will also disassociate aggregates and delete inventories. Normal Response Codes: 204 Error response codes: itemNotFound(404), conflict(409) A `409 Conflict` response code will be returned if allocation records exist for any of the inventories that would be deleted as a result of removing the resource provider. This error code will also be returned if there are existing child resource providers under the parent resource provider being deleted. Request ------- .. rest_parameters:: parameters.yaml - uuid: resource_provider_uuid_path Response -------- No body content is returned on a successful DELETE. 
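Purely as an illustration (not part of the shipped samples), the update request described above could be issued with the Python ``requests`` library as sketched below; the endpoint, token, provider UUID, and microversion are assumed placeholder values.

.. code-block:: python

    # Minimal sketch: rename an existing resource provider.
    # PLACEMENT, TOKEN and UUID are placeholders for this illustration.
    import requests

    PLACEMENT = "http://controller/placement"  # assumed endpoint
    TOKEN = "..."                               # assumed Keystone token
    UUID = "33f26ae0-dbf2-485b-a24a-244d8280e29f"  # example provider uuid

    resp = requests.put(
        PLACEMENT + "/resource_providers/" + UUID,
        headers={
            "X-Auth-Token": TOKEN,
            "OpenStack-API-Version": "placement 1.14",
        },
        # ``parent_provider_uuid`` may also be included from microversion
        # 1.14 onwards; only the name is changed here.
        json={"name": "Shared storage"},
    )
    # 200 OK returns the updated provider body; 409 signals a name conflict.
    print(resp.status_code)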
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/resource_provider_allocations.inc0000664000175000017500000000170400000000000027306 0ustar00zuulzuul00000000000000============================= Resource provider allocations ============================= See `Allocations`_ for a description. List resource provider allocations ================================== Return a representation of all allocations made against this resource provider, keyed by consumer uuid. Each allocation includes one or more classes of resource and the amount consumed. .. rest_method:: GET /resource_providers/{uuid}/allocations Normal Response Codes: 200 Error response codes: itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - uuid: resource_provider_uuid_path Response -------- .. rest_parameters:: parameters.yaml - allocations: resource_provider_allocations - resources: resources - resource_provider_generation: resource_provider_generation Response Example ---------------- .. literalinclude:: ./samples/resource_provider_allocations/get-resource_provider_allocations.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/resource_provider_traits.inc0000664000175000017500000000570600000000000026312 0ustar00zuulzuul00000000000000======================== Resource provider traits ======================== See `Traits`_ for a description. This group of API requests queries/edits the association between traits and resource providers. .. note:: Traits API requests are available starting from version 1.6. List resource provider traits ============================= Return a list of traits for the resource provider identified by `{uuid}`. .. rest_method:: GET /resource_providers/{uuid}/traits Normal Response Codes: 200 Error response codes: itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - uuid: resource_provider_uuid_path Response -------- .. rest_parameters:: parameters.yaml - traits: traits - resource_provider_generation: resource_provider_generation Response Example ---------------- .. literalinclude:: ./samples/resource_provider_traits/get-resource_provider-traits.json :language: javascript Update resource provider traits =============================== Associate traits with the resource provider identified by `{uuid}`. All the associated traits will be replaced by the traits specified in the request body. .. rest_method:: PUT /resource_providers/{uuid}/traits Normal Response Codes: 200 Error response codes: badRequest(400), itemNotFound(404), conflict(409) * `400 Bad Request` if any of the specified traits are not valid. The valid traits can be queried by `GET /traits`. * `409 Conflict` if the `resource_provider_generation` doesn't match with the server side. Request ------- .. rest_parameters:: parameters.yaml - uuid: resource_provider_uuid_path - traits: traits - resource_provider_generation: resource_provider_generation Request example --------------- .. literalinclude:: ./samples/resource_provider_traits/update-resource_provider-traits-request.json :language: javascript Response -------- .. rest_parameters:: parameters.yaml - traits: traits - resource_provider_generation: resource_provider_generation Response Example ---------------- .. 
literalinclude:: ./samples/resource_provider_traits/update-resource_provider-traits.json :language: javascript Delete resource provider traits =============================== Dissociate all the traits from the resource provider identified by `{uuid}`. .. rest_method:: DELETE /resource_providers/{uuid}/traits Normal Response Codes: 204 Error response codes: itemNotFound(404), conflict(409) * `409 Conflict` if the provider's traits are updated by another thread while attempting the operation. .. note:: Since this request does not accept the resource provider generation, it is not safe to use when multiple threads are managing traits for a single provider. In such situations, use the ``PUT /resource_providers/{uuid}/traits`` API with an empty ``traits`` list. Request ------- .. rest_parameters:: parameters.yaml - uuid: resource_provider_uuid_path Response -------- No body content is returned on a successful DELETE. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/resource_provider_usages.inc0000664000175000017500000000207500000000000026267 0ustar00zuulzuul00000000000000======================== Resource provider usages ======================== Show the consumption of resources for a resource provider in an aggregated form, i.e. without information for a particular consumer. See `Resource provider allocations`_. List resource provider usages ============================= Return a report of usage information for resources associated with the resource provider identified by `{uuid}`. The value is a dictionary of resource classes paired with the sum of the allocations of that resource class for this resource provider. .. rest_method:: GET /resource_providers/{uuid}/usages Normal Response Codes: 200 Error response codes: itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - uuid: resource_provider_uuid_path Response -------- .. rest_parameters:: parameters.yaml - resource_provider_generation: resource_provider_generation - usages: resource_provider_usages Response Example ---------------- .. literalinclude:: ./samples/resource_provider_usages/get-resource_provider_usages.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/resource_providers.inc0000664000175000017500000000612000000000000025076 0ustar00zuulzuul00000000000000================== Resource Providers ================== Resource providers are entities which provide consumable inventory of one or more classes of resource (such as disk or memory). They can be listed (with filters), created, updated and deleted. List resource providers ======================= .. rest_method:: GET /resource_providers List an optionally filtered collection of resource providers. Normal Response Codes: 200 Error response codes: badRequest(400) A `400 BadRequest` response code will be returned if a resource class specified in ``resources`` request parameter does not exist. Request ------- Several query parameters are available to filter the returned list of resource providers. If multiple different parameters are provided, the results of all filters are merged with a boolean `AND`. .. 
rest_parameters:: parameters.yaml - name: resource_provider_name_query - uuid: resource_provider_uuid_query - member_of: resource_provider_member_of - resources: resources_query_1_4 - in_tree: resource_provider_tree_query - required: resource_provider_required_query Response -------- .. rest_parameters:: parameters.yaml - resource_providers: resource_providers - generation: resource_provider_generation - uuid: resource_provider_uuid - links: resource_provider_links - name: resource_provider_name - parent_provider_uuid: resource_provider_parent_provider_uuid_response_1_14 - root_provider_uuid: resource_provider_root_provider_uuid_required Response Example ---------------- .. literalinclude:: ./samples/resource_providers/get-resource_providers.json :language: javascript Create resource provider ======================== .. rest_method:: POST /resource_providers Create a new resource provider. Normal Response Codes: 201 (microversions 1.0 - 1.19), 200 (microversions 1.20 - ) Error response codes: conflict(409) A `409 Conflict` response code will be returned if another resource provider exists with the provided name or uuid. Request ------- .. rest_parameters:: parameters.yaml - name: resource_provider_name - uuid: resource_provider_uuid_opt - parent_provider_uuid: resource_provider_parent_provider_uuid_request Request example --------------- .. literalinclude:: ./samples/resource_providers/create-resource_providers-request.json :language: javascript Response (microversions 1.0 - 1.19) ----------------------------------- .. rest_parameters:: parameters.yaml - Location: location No body content is returned on a successful POST. Response (microversions 1.20 - ) -------------------------------- .. rest_parameters:: parameters.yaml - Location: location - generation: resource_provider_generation - uuid: resource_provider_uuid - links: resource_provider_links_v1_20 - name: resource_provider_name - parent_provider_uuid: resource_provider_parent_provider_uuid_required_no_min - root_provider_uuid: resource_provider_root_provider_uuid_no_min Response Example (microversions 1.20 - ) ---------------------------------------- .. literalinclude:: ./samples/resource_providers/create-resource_provider.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/root.inc0000664000175000017500000000261200000000000022137 0ustar00zuulzuul00000000000000============ API Versions ============ In order to bring new features to users over time, the Placement API supports microversioning. Microversions allow use of certain features on a per-request basis via the ``OpenStack-API-Version`` header. For example, to request microversion 1.10, specify the header:: OpenStack-API-Version: placement 1.10 For more details about Microversions, please reference: `Microversion Specification `_ .. note:: The maximum microversion supported by each release varies. Please reference: `REST API Version History `__ for API microversion history details. List Versions ============= .. rest_method:: GET / Fetch information about all known major versions of the placement API, including information about the minimum and maximum microversions. .. note:: At this time there is only one major version of the placement API: version 1.0. Normal Response Codes: 200 Response -------- .. 
rest_parameters:: parameters.yaml - versions: versions - id: version_id - min_version: version_min - max_version: version_max - status: version_status - links: version_links Response Example ---------------- .. literalinclude:: ./samples/root/get-root.json :language: javascript ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2087777 openstack_placement-13.0.0/api-ref/source/samples/0000775000175000017500000000000000000000000022124 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.228778 openstack_placement-13.0.0/api-ref/source/samples/aggregates/0000775000175000017500000000000000000000000024235 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/aggregates/get-aggregates-1.19.json0000664000175000017500000000024400000000000030404 0ustar00zuulzuul00000000000000{ "aggregates": [ "42896e0d-205d-4fe3-bd1e-100924931787", "5e08ea53-c4c6-448e-9334-ac4953de3cfa" ], "resource_provider_generation": 8 } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/aggregates/get-aggregates.json0000664000175000017500000000017500000000000030021 0ustar00zuulzuul00000000000000{ "aggregates": [ "42896e0d-205d-4fe3-bd1e-100924931787", "5e08ea53-c4c6-448e-9334-ac4953de3cfa" ] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/aggregates/update-aggregates-1.19.json0000664000175000017500000000024400000000000031107 0ustar00zuulzuul00000000000000{ "aggregates": [ "42896e0d-205d-4fe3-bd1e-100924931787", "5e08ea53-c4c6-448e-9334-ac4953de3cfa" ], "resource_provider_generation": 9 } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/aggregates/update-aggregates-request-1.19.json0000664000175000017500000000024400000000000032575 0ustar00zuulzuul00000000000000{ "aggregates": [ "42896e0d-205d-4fe3-bd1e-100924931787", "5e08ea53-c4c6-448e-9334-ac4953de3cfa" ], "resource_provider_generation": 9 } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/aggregates/update-aggregates-request.json0000664000175000017500000000013300000000000032204 0ustar00zuulzuul00000000000000[ "42896e0d-205d-4fe3-bd1e-100924931787", "5e08ea53-c4c6-448e-9334-ac4953de3cfa" ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/aggregates/update-aggregates.json0000664000175000017500000000017500000000000030524 0ustar00zuulzuul00000000000000{ "aggregates": [ "42896e0d-205d-4fe3-bd1e-100924931787", "5e08ea53-c4c6-448e-9334-ac4953de3cfa" ] } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.228778 openstack_placement-13.0.0/api-ref/source/samples/allocation_candidates/0000775000175000017500000000000000000000000026430 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 
path=openstack_placement-13.0.0/api-ref/source/samples/allocation_candidates/get-allocation_candidates-1.12.json 22 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/allocation_candidates/get-allocation_candidates-1.0000664000175000017500000000347100000000000033655 0ustar00zuulzuul00000000000000{ "allocation_requests": [ { "allocations": { "a99bad54-a275-4c4f-a8a3-ac00d57e5c64": { "resources": { "DISK_GB": 100 } }, "35791f28-fb45-4717-9ea9-435b3ef7c3b3": { "resources": { "VCPU": 1, "MEMORY_MB": 1024 } } } }, { "allocations": { "a99bad54-a275-4c4f-a8a3-ac00d57e5c64": { "resources": { "DISK_GB": 100 } }, "915ef8ed-9b91-4e38-8802-2e4224ad54cd": { "resources": { "VCPU": 1, "MEMORY_MB": 1024 } } } } ], "provider_summaries": { "a99bad54-a275-4c4f-a8a3-ac00d57e5c64": { "resources": { "DISK_GB": { "used": 0, "capacity": 1900 } } }, "915ef8ed-9b91-4e38-8802-2e4224ad54cd": { "resources": { "VCPU": { "used": 0, "capacity": 384 }, "MEMORY_MB": { "used": 0, "capacity": 196608 } } }, "35791f28-fb45-4717-9ea9-435b3ef7c3b3": { "resources": { "VCPU": { "used": 0, "capacity": 384 }, "MEMORY_MB": { "used": 0, "capacity": 196608 } } } } } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=openstack_placement-13.0.0/api-ref/source/samples/allocation_candidates/get-allocation_candidates-1.17.json 22 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/allocation_candidates/get-allocation_candidates-1.0000664000175000017500000000367100000000000033657 0ustar00zuulzuul00000000000000{ "allocation_requests": [ { "allocations": { "a99bad54-a275-4c4f-a8a3-ac00d57e5c64": { "resources": { "DISK_GB": 100 } }, "35791f28-fb45-4717-9ea9-435b3ef7c3b3": { "resources": { "VCPU": 1, "MEMORY_MB": 1024 } } } }, { "allocations": { "a99bad54-a275-4c4f-a8a3-ac00d57e5c64": { "resources": { "DISK_GB": 100 } }, "915ef8ed-9b91-4e38-8802-2e4224ad54cd": { "resources": { "VCPU": 1, "MEMORY_MB": 1024 } } } } ], "provider_summaries": { "a99bad54-a275-4c4f-a8a3-ac00d57e5c64": { "resources": { "DISK_GB": { "used": 0, "capacity": 1900 } }, "traits": ["HW_CPU_X86_SSE2", "HW_CPU_X86_AVX2"] }, "915ef8ed-9b91-4e38-8802-2e4224ad54cd": { "resources": { "VCPU": { "used": 0, "capacity": 384 }, "MEMORY_MB": { "used": 0, "capacity": 196608 } }, "traits": ["HW_NIC_SRIOV"] }, "35791f28-fb45-4717-9ea9-435b3ef7c3b3": { "resources": { "VCPU": { "used": 0, "capacity": 384 }, "MEMORY_MB": { "used": 0, "capacity": 196608 } }, "traits": [] } } } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=openstack_placement-13.0.0/api-ref/source/samples/allocation_candidates/get-allocation_candidates-1.29.json 22 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/allocation_candidates/get-allocation_candidates-1.0000664000175000017500000000527300000000000033657 0ustar00zuulzuul00000000000000{ "allocation_requests": [ { "allocations": { "a99bad54-a275-4c4f-a8a3-ac00d57e5c64": { "resources": { "DISK_GB": 100 } }, "35791f28-fb45-4717-9ea9-435b3ef7c3b3": { "resources": { "VCPU": 1, "MEMORY_MB": 1024 } } } }, { "allocations": { "a99bad54-a275-4c4f-a8a3-ac00d57e5c64": { "resources": { "DISK_GB": 100 } }, "915ef8ed-9b91-4e38-8802-2e4224ad54cd": { "resources": { "VCPU": 1, "MEMORY_MB": 1024 } } } } ], "provider_summaries": { "a99bad54-a275-4c4f-a8a3-ac00d57e5c64": { "resources": { "DISK_GB": { "used": 0, "capacity": 1900 } }, "traits": ["MISC_SHARES_VIA_AGGREGATE"], "parent_provider_uuid": null, "root_provider_uuid": 
"a99bad54-a275-4c4f-a8a3-ac00d57e5c64" }, "35791f28-fb45-4717-9ea9-435b3ef7c3b3": { "resources": { "VCPU": { "used": 0, "capacity": 384 }, "MEMORY_MB": { "used": 0, "capacity": 196608 } }, "traits": ["HW_CPU_X86_SSE2", "HW_CPU_X86_AVX2"], "parent_provider_uuid": null, "root_provider_uuid": "35791f28-fb45-4717-9ea9-435b3ef7c3b3" }, "915ef8ed-9b91-4e38-8802-2e4224ad54cd": { "resources": { "VCPU": { "used": 0, "capacity": 384 }, "MEMORY_MB": { "used": 0, "capacity": 196608 } }, "traits": ["HW_NIC_SRIOV"], "parent_provider_uuid": null, "root_provider_uuid": "915ef8ed-9b91-4e38-8802-2e4224ad54cd" }, "f5120cad-67d9-4f20-9210-3092a79a28cf": { "resources": { "SRIOV_NET_VF": { "used": 0, "capacity": 8 } }, "traits": [], "parent_provider_uuid": "915ef8ed-9b91-4e38-8802-2e4224ad54cd", "root_provider_uuid": "915ef8ed-9b91-4e38-8802-2e4224ad54cd" } } } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=openstack_placement-13.0.0/api-ref/source/samples/allocation_candidates/get-allocation_candidates-1.34.json 22 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/allocation_candidates/get-allocation_candidates-1.0000664000175000017500000000466200000000000033660 0ustar00zuulzuul00000000000000{ "allocation_requests": [ { "allocations": { "92e971c9-777a-48bf-a181-a2ca1105c015": { "resources": { "NET_BW_EGR_KILOBIT_PER_SEC": 10 } }, "cefbdf54-05a8-4db4-ad2b-d6729e5a4de8": { "resources": { "NET_BW_EGR_KILOBIT_PER_SEC": 20 } }, "9a9c6b0f-e8d1-4d16-b053-a2bfe8a76757": { "resources": { "VCPU": 1 } } }, "mappings": { "_NET1": [ "92e971c9-777a-48bf-a181-a2ca1105c015" ], "_NET2": [ "cefbdf54-05a8-4db4-ad2b-d6729e5a4de8" ], "": [ "9a9c6b0f-e8d1-4d16-b053-a2bfe8a76757" ] } } ], "provider_summaries": { "be99627d-e848-44ef-8341-683e2e557c58": { "resources": {}, "traits": [ "COMPUTE_VOLUME_MULTI_ATTACH" ], "parent_provider_uuid": null, "root_provider_uuid": "be99627d-e848-44ef-8341-683e2e557c58" }, "9a9c6b0f-e8d1-4d16-b053-a2bfe8a76757": { "resources": { "VCPU": { "capacity": 4, "used": 0 }, "MEMORY_MB": { "capacity": 2048, "used": 0 } }, "traits": [ "HW_NUMA_ROOT", "CUSTOM_FOO" ], "parent_provider_uuid": "be99627d-e848-44ef-8341-683e2e557c58", "root_provider_uuid": "be99627d-e848-44ef-8341-683e2e557c58" }, "ba415f98-1960-4488-b2ed-4518b77eaa60": { "resources": {}, "traits": [ "CUSTOM_VNIC_TYPE_DIRECT" ], "parent_provider_uuid": "be99627d-e848-44ef-8341-683e2e557c58", "root_provider_uuid": "be99627d-e848-44ef-8341-683e2e557c58" }, "92e971c9-777a-48bf-a181-a2ca1105c015": { "resources": { "NET_BW_EGR_KILOBIT_PER_SEC": { "capacity": 10000, "used": 0 } }, "traits": [ "CUSTOM_PHYSNET1" ], "parent_provider_uuid": "ba415f98-1960-4488-b2ed-4518b77eaa60", "root_provider_uuid": "be99627d-e848-44ef-8341-683e2e557c58" }, "cefbdf54-05a8-4db4-ad2b-d6729e5a4de8": { "resources": { "NET_BW_EGR_KILOBIT_PER_SEC": { "capacity": 20000, "used": 0 } }, "traits": [ "CUSTOM_PHYSNET2" ], "parent_provider_uuid": "ba415f98-1960-4488-b2ed-4518b77eaa60", "root_provider_uuid": "be99627d-e848-44ef-8341-683e2e557c58" } } } ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=openstack_placement-13.0.0/api-ref/source/samples/allocation_candidates/get-allocation_candidates.json 22 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/allocation_candidates/get-allocation_candidates.js0000664000175000017500000000152600000000000034053 0ustar00zuulzuul00000000000000{ "allocation_requests": [ { "allocations": [ { "resource_provider": { 
"uuid": "30742363-f65e-4012-a60a-43e0bec38f0e" }, "resources": { "MEMORY_MB": 512 } } ] } ], "provider_summaries": { "30742363-f65e-4012-a60a-43e0bec38f0e": { "resources": { "DISK_GB": { "capacity": 77, "used": 0 }, "MEMORY_MB": { "capacity": 11206, "used": 256 }, "VCPU": { "capacity": 64, "used": 0 } } } } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.228778 openstack_placement-13.0.0/api-ref/source/samples/allocations/0000775000175000017500000000000000000000000024434 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/allocations/get-allocations-1.28.json0000664000175000017500000000101400000000000030776 0ustar00zuulzuul00000000000000{ "allocations": { "92637880-2d79-43c6-afab-d860886c6391": { "generation": 2, "resources": { "DISK_GB": 5 } }, "ba8e1ef8-7fa3-41a4-9bb4-d7cb2019899b": { "generation": 8, "resources": { "MEMORY_MB": 512, "VCPU": 2 } } }, "consumer_generation": 1, "project_id": "7e67cbf7-7c38-4a32-b85b-0739c690991a", "user_id": "067f691e-725a-451a-83e2-5c3d13e1dffc" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/allocations/get-allocations-1.38.json0000664000175000017500000000105600000000000031005 0ustar00zuulzuul00000000000000{ "allocations": { "92637880-2d79-43c6-afab-d860886c6391": { "generation": 2, "resources": { "DISK_GB": 5 } }, "ba8e1ef8-7fa3-41a4-9bb4-d7cb2019899b": { "generation": 8, "resources": { "MEMORY_MB": 512, "VCPU": 2 } } }, "consumer_generation": 1, "project_id": "7e67cbf7-7c38-4a32-b85b-0739c690991a", "user_id": "067f691e-725a-451a-83e2-5c3d13e1dffc", "consumer_type": "INSTANCE" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/allocations/get-allocations.json0000664000175000017500000000075600000000000030424 0ustar00zuulzuul00000000000000{ "allocations": { "92637880-2d79-43c6-afab-d860886c6391": { "generation": 2, "resources": { "DISK_GB": 5 } }, "ba8e1ef8-7fa3-41a4-9bb4-d7cb2019899b": { "generation": 8, "resources": { "MEMORY_MB": 512, "VCPU": 2 } } }, "project_id": "7e67cbf7-7c38-4a32-b85b-0739c690991a", "user_id": "067f691e-725a-451a-83e2-5c3d13e1dffc" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/allocations/manage-allocations-request-1.28.json0000664000175000017500000000172100000000000033142 0ustar00zuulzuul00000000000000{ "30328d13-e299-4a93-a102-61e4ccabe474": { "consumer_generation": 1, "project_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "user_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "allocations": { "e10927c4-8bc9-465d-ac60-d2f79f7e4a00": { "resources": { "VCPU": 2, "MEMORY_MB": 3 }, "generation": 4 } } }, "71921e4e-1629-4c5b-bf8d-338d915d2ef3": { "consumer_generation": 1, "project_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "user_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "allocations": {} }, "48c1d40f-45d8-4947-8d46-52b4e1326df8": { "consumer_generation": 1, "project_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "user_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "allocations": { "e10927c4-8bc9-465d-ac60-d2f79f7e4a00": { "resources": { "VCPU": 4, "MEMORY_MB": 5 }, "generation": 12 } } } } 
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/allocations/manage-allocations-request-1.38.json0000664000175000017500000000206600000000000033146 0ustar00zuulzuul00000000000000{ "30328d13-e299-4a93-a102-61e4ccabe474": { "consumer_generation": 1, "project_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "user_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "allocations": { "e10927c4-8bc9-465d-ac60-d2f79f7e4a00": { "resources": { "VCPU": 2, "MEMORY_MB": 3 }, "generation": 4 } }, "consumer_type": "INSTANCE" }, "71921e4e-1629-4c5b-bf8d-338d915d2ef3": { "consumer_generation": 1, "project_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "user_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "allocations": {}, "consumer_type": "MIGRATION" }, "48c1d40f-45d8-4947-8d46-52b4e1326df8": { "consumer_generation": 1, "project_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "user_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "allocations": { "e10927c4-8bc9-465d-ac60-d2f79f7e4a00": { "resources": { "VCPU": 4, "MEMORY_MB": 5 }, "generation": 12 } }, "consumer_type": "INSTANCE" } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/allocations/manage-allocations-request.json0000664000175000017500000000150300000000000032552 0ustar00zuulzuul00000000000000{ "30328d13-e299-4a93-a102-61e4ccabe474": { "project_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "user_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "allocations": { "e10927c4-8bc9-465d-ac60-d2f79f7e4a00": { "resources": { "VCPU": 2, "MEMORY_MB": 3 } } } }, "71921e4e-1629-4c5b-bf8d-338d915d2ef3": { "project_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "user_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "allocations": {} }, "48c1d40f-45d8-4947-8d46-52b4e1326df8": { "project_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "user_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "allocations": { "e10927c4-8bc9-465d-ac60-d2f79f7e4a00": { "resources": { "VCPU": 4, "MEMORY_MB": 5 } } } } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/allocations/update-allocations-request-1.12.json0000664000175000017500000000055600000000000033172 0ustar00zuulzuul00000000000000{ "allocations": { "4e061c03-611e-4caa-bf26-999dcff4284e": { "resources": { "DISK_GB": 20 } }, "89873422-1373-46e5-b467-f0c5e6acf08f": { "resources": { "MEMORY_MB": 1024, "VCPU": 1 } } }, "user_id": "66cb2f29-c86d-47c3-8af5-69ae7b778c70", "project_id": "42a32c07-3eeb-4401-9373-68a8cdca6784" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/allocations/update-allocations-request-1.28.json0000664000175000017500000000061200000000000033172 0ustar00zuulzuul00000000000000{ "allocations": { "4e061c03-611e-4caa-bf26-999dcff4284e": { "resources": { "DISK_GB": 20 } }, "89873422-1373-46e5-b467-f0c5e6acf08f": { "resources": { "MEMORY_MB": 1024, "VCPU": 1 } } }, "consumer_generation": 1, "user_id": "66cb2f29-c86d-47c3-8af5-69ae7b778c70", "project_id": "42a32c07-3eeb-4401-9373-68a8cdca6784" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 
openstack_placement-13.0.0/api-ref/source/samples/allocations/update-allocations-request-1.38.json0000664000175000017500000000065200000000000033177 0ustar00zuulzuul00000000000000{ "allocations": { "4e061c03-611e-4caa-bf26-999dcff4284e": { "resources": { "DISK_GB": 20 } }, "89873422-1373-46e5-b467-f0c5e6acf08f": { "resources": { "MEMORY_MB": 1024, "VCPU": 1 } } }, "consumer_generation": 1, "user_id": "66cb2f29-c86d-47c3-8af5-69ae7b778c70", "project_id": "42a32c07-3eeb-4401-9373-68a8cdca6784", "consumer_type": "INSTANCE" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/allocations/update-allocations-request.json0000664000175000017500000000110600000000000032603 0ustar00zuulzuul00000000000000{ "allocations": [ { "resource_provider": { "uuid": "844ac34d-620e-474c-833c-4c9921251353" }, "resources": { "MEMORY_MB": 512, "VCPU": 2 } }, { "resource_provider": { "uuid": "92637880-2d79-43c6-afab-d860886c6391" }, "resources": { "DISK_GB": 5 } } ], "project_id": "6e3b2ce9-9175-4830-a862-b9de690bdceb", "user_id": "81c516e3-5e0e-4dcb-9a38-4473d229a950" } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.228778 openstack_placement-13.0.0/api-ref/source/samples/inventories/0000775000175000017500000000000000000000000024471 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/inventories/get-inventories.json0000664000175000017500000000125200000000000030506 0ustar00zuulzuul00000000000000{ "inventories": { "DISK_GB": { "allocation_ratio": 1.0, "max_unit": 35, "min_unit": 1, "reserved": 0, "step_size": 1, "total": 35 }, "MEMORY_MB": { "allocation_ratio": 1.5, "max_unit": 5825, "min_unit": 1, "reserved": 512, "step_size": 1, "total": 5825 }, "VCPU": { "allocation_ratio": 16.0, "max_unit": 4, "min_unit": 1, "reserved": 0, "step_size": 1, "total": 4 } }, "resource_provider_generation": 7 } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/inventories/get-inventory.json0000664000175000017500000000024500000000000030177 0ustar00zuulzuul00000000000000{ "allocation_ratio": 16.0, "max_unit": 4, "min_unit": 1, "reserved": 0, "resource_provider_generation": 9, "step_size": 1, "total": 4 } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/inventories/update-inventories-request.json0000664000175000017500000000052300000000000032677 0ustar00zuulzuul00000000000000{ "inventories": { "MEMORY_MB": { "allocation_ratio": 2.0, "max_unit": 16, "step_size": 4, "total": 128 }, "VCPU": { "allocation_ratio": 10.0, "reserved": 2, "total": 64 } }, "resource_provider_generation": 1 } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/inventories/update-inventories.json0000664000175000017500000000074400000000000031216 0ustar00zuulzuul00000000000000{ "inventories": { "MEMORY_MB": { "allocation_ratio": 2.0, "max_unit": 16, "min_unit": 1, "reserved": 0, "step_size": 4, "total": 128 }, "VCPU": { "allocation_ratio": 10.0, "max_unit": 2147483647, "min_unit": 1, "reserved": 2, "step_size": 1, "total": 64 } }, 
"resource_provider_generation": 2 } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/inventories/update-inventory-request.json0000664000175000017500000000007300000000000032367 0ustar00zuulzuul00000000000000{ "resource_provider_generation": 7, "total": 50 } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/inventories/update-inventory.json0000664000175000017500000000025600000000000030704 0ustar00zuulzuul00000000000000{ "allocation_ratio": 1.0, "max_unit": 2147483647, "min_unit": 1, "reserved": 0, "resource_provider_generation": 8, "step_size": 1, "total": 50 } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.228778 openstack_placement-13.0.0/api-ref/source/samples/reshaper/0000775000175000017500000000000000000000000023735 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/reshaper/post-reshaper-1.30.json0000664000175000017500000000324300000000000030005 0ustar00zuulzuul00000000000000{ "allocations": { "9ae60315-80c2-48a0-a168-ca4f27c307e1": { "allocations": { "a7466641-cd72-499b-b6c9-c208eacecb3d": { "resources": { "DISK_GB": 1000 } } }, "project_id": "2f0c4ffc-4c4d-407a-b334-56297b871b7f", "user_id": "cc8a0fe0-2b7c-4392-ae51-747bc73cf473", "consumer_generation": 1 }, "4a6444e5-10d6-43f6-9a0b-8acce9309ac9": { "allocations": { "c4ddddbb-01ee-4814-85c9-f57a962c22ba": { "resources": { "VCPU": 1 } }, "a7466641-cd72-499b-b6c9-c208eacecb3d": { "resources": { "DISK_GB": 20 } } }, "project_id": "2f0c4ffc-4c4d-407a-b334-56297b871b7f", "user_id": "406e1095-71cb-47b9-9b3c-aedb7f663f5a", "consumer_generation": 1 }, "e10e7ca0-2ac5-4c98-bad9-51c95b1930ed": { "allocations": { "c4ddddbb-01ee-4814-85c9-f57a962c22ba": { "resources": { "VCPU": 8 } } }, "project_id": "2f0c4ffc-4c4d-407a-b334-56297b871b7f", "user_id": "cc8a0fe0-2b7c-4392-ae51-747bc73cf473", "consumer_generation": 1 } }, "inventories": { "c4ddddbb-01ee-4814-85c9-f57a962c22ba": { "inventories": { "VCPU": { "max_unit": 8, "total": 10 } }, "resource_provider_generation": null }, "a7466641-cd72-499b-b6c9-c208eacecb3d": { "inventories": { "DISK_GB": { "min_unit": 10, "total": 2048, "max_unit": 1200, "step_size": 10 } }, "resource_provider_generation": 5 } } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/reshaper/post-reshaper-1.38.json0000664000175000017500000000341500000000000030016 0ustar00zuulzuul00000000000000{ "allocations": { "9ae60315-80c2-48a0-a168-ca4f27c307e1": { "allocations": { "a7466641-cd72-499b-b6c9-c208eacecb3d": { "resources": { "DISK_GB": 1000 } } }, "project_id": "2f0c4ffc-4c4d-407a-b334-56297b871b7f", "user_id": "cc8a0fe0-2b7c-4392-ae51-747bc73cf473", "consumer_generation": 1, "consumer_type": "INSTANCE" }, "4a6444e5-10d6-43f6-9a0b-8acce9309ac9": { "allocations": { "c4ddddbb-01ee-4814-85c9-f57a962c22ba": { "resources": { "VCPU": 1 } }, "a7466641-cd72-499b-b6c9-c208eacecb3d": { "resources": { "DISK_GB": 20 } } }, "project_id": "2f0c4ffc-4c4d-407a-b334-56297b871b7f", "user_id": "406e1095-71cb-47b9-9b3c-aedb7f663f5a", "consumer_generation": 1, "consumer_type": "INSTANCE" }, "e10e7ca0-2ac5-4c98-bad9-51c95b1930ed": { 
"allocations": { "c4ddddbb-01ee-4814-85c9-f57a962c22ba": { "resources": { "VCPU": 8 } } }, "project_id": "2f0c4ffc-4c4d-407a-b334-56297b871b7f", "user_id": "cc8a0fe0-2b7c-4392-ae51-747bc73cf473", "consumer_generation": 1, "consumer_type": "INSTANCE" } }, "inventories": { "c4ddddbb-01ee-4814-85c9-f57a962c22ba": { "inventories": { "VCPU": { "max_unit": 8, "total": 10 } }, "resource_provider_generation": null }, "a7466641-cd72-499b-b6c9-c208eacecb3d": { "inventories": { "DISK_GB": { "min_unit": 10, "total": 2048, "max_unit": 1200, "step_size": 10 } }, "resource_provider_generation": 5 } } } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.228778 openstack_placement-13.0.0/api-ref/source/samples/resource_classes/0000775000175000017500000000000000000000000025470 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=openstack_placement-13.0.0/api-ref/source/samples/resource_classes/create-resource_classes-request.json 22 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/resource_classes/create-resource_classes-request.j0000664000175000017500000000003000000000000034127 0ustar00zuulzuul00000000000000{"name": "CUSTOM_FPGA"} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/resource_classes/get-resource_class.json0000664000175000017500000000024100000000000032151 0ustar00zuulzuul00000000000000{ "links": [ { "href": "/placement/resource_classes/CUSTOM_FPGA", "rel": "self" } ], "name": "CUSTOM_FPGA" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/resource_classes/get-resource_classes.json0000664000175000017500000000446100000000000032511 0ustar00zuulzuul00000000000000{ "resource_classes": [ { "links": [ { "href": "/placement/resource_classes/VCPU", "rel": "self" } ], "name": "VCPU" }, { "links": [ { "href": "/placement/resource_classes/MEMORY_MB", "rel": "self" } ], "name": "MEMORY_MB" }, { "links": [ { "href": "/placement/resource_classes/DISK_GB", "rel": "self" } ], "name": "DISK_GB" }, { "links": [ { "href": "/placement/resource_classes/PCI_DEVICE", "rel": "self" } ], "name": "PCI_DEVICE" }, { "links": [ { "href": "/placement/resource_classes/SRIOV_NET_VF", "rel": "self" } ], "name": "SRIOV_NET_VF" }, { "links": [ { "href": "/placement/resource_classes/NUMA_SOCKET", "rel": "self" } ], "name": "NUMA_SOCKET" }, { "links": [ { "href": "/placement/resource_classes/NUMA_CORE", "rel": "self" } ], "name": "NUMA_CORE" }, { "links": [ { "href": "/placement/resource_classes/NUMA_THREAD", "rel": "self" } ], "name": "NUMA_THREAD" }, { "links": [ { "href": "/placement/resource_classes/NUMA_MEMORY_MB", "rel": "self" } ], "name": "NUMA_MEMORY_MB" }, { "links": [ { "href": "/placement/resource_classes/IPV4_ADDRESS", "rel": "self" } ], "name": "IPV4_ADDRESS" } ] } ././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=openstack_placement-13.0.0/api-ref/source/samples/resource_classes/update-resource_class-request.json 22 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/resource_classes/update-resource_class-request.jso0000664000175000017500000000003300000000000034163 0ustar00zuulzuul00000000000000{"name": "CUSTOM_FPGA_V2"} ././@PaxHeader0000000000000000000000000000002600000000000011453 
xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/resource_classes/update-resource_class.json0000664000175000017500000000024700000000000032662 0ustar00zuulzuul00000000000000{ "links": [ { "href": "/placement/resource_classes/CUSTOM_FPGA_V2", "rel": "self" } ], "name": "CUSTOM_FPGA_V2" } ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.228778 openstack_placement-13.0.0/api-ref/source/samples/resource_provider_allocations/0000775000175000017500000000000000000000000030255 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=openstack_placement-13.0.0/api-ref/source/samples/resource_provider_allocations/get-resource_provider_allocations.json 22 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/resource_provider_allocations/get-resource_provide0000664000175000017500000000105500000000000034335 0ustar00zuulzuul00000000000000{ "allocations": { "56785a3f-6f1c-4fec-af0b-0faf075b1fcb": { "resources": { "MEMORY_MB": 256, "VCPU": 1 } }, "9afd5aeb-d6b9-4dea-a588-1e6327a91834": { "resources": { "MEMORY_MB": 512, "VCPU": 2 } }, "9d16a611-e7f9-4ef3-be26-c61ed01ecefb": { "resources": { "MEMORY_MB": 1024, "VCPU": 1 } } }, "resource_provider_generation": 12 } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2327778 openstack_placement-13.0.0/api-ref/source/samples/resource_provider_traits/0000775000175000017500000000000000000000000027253 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=openstack_placement-13.0.0/api-ref/source/samples/resource_provider_traits/get-resource_provider-traits.json 22 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/resource_provider_traits/get-resource_provider-tra0000664000175000017500000000020200000000000034272 0ustar00zuulzuul00000000000000{ "resource_provider_generation": 1, "traits": [ "CUSTOM_HW_FPGA_CLASS1", "CUSTOM_HW_FPGA_CLASS3" ] } ././@PaxHeader0000000000000000000000000000022700000000000011456 xustar0000000000000000129 path=openstack_placement-13.0.0/api-ref/source/samples/resource_provider_traits/update-resource_provider-traits-request.json 22 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/resource_provider_traits/update-resource_provider-0000664000175000017500000000020200000000000034266 0ustar00zuulzuul00000000000000{ "resource_provider_generation": 0, "traits": [ "CUSTOM_HW_FPGA_CLASS1", "CUSTOM_HW_FPGA_CLASS3" ] } ././@PaxHeader0000000000000000000000000000021700000000000011455 xustar0000000000000000121 path=openstack_placement-13.0.0/api-ref/source/samples/resource_provider_traits/update-resource_provider-traits.json 22 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/resource_provider_traits/update-resource_provider-0000664000175000017500000000020200000000000034266 0ustar00zuulzuul00000000000000{ "resource_provider_generation": 1, "traits": [ "CUSTOM_HW_FPGA_CLASS1", "CUSTOM_HW_FPGA_CLASS3" ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2327778 openstack_placement-13.0.0/api-ref/source/samples/resource_provider_usages/0000775000175000017500000000000000000000000027234 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 
path=openstack_placement-13.0.0/api-ref/source/samples/resource_provider_usages/get-resource_provider_usages.json 22 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/resource_provider_usages/get-resource_provider_usa0000664000175000017500000000020300000000000034340 0ustar00zuulzuul00000000000000{ "resource_provider_generation": 1, "usages": { "DISK_GB": 1, "MEMORY_MB": 512, "VCPU": 1 } } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2327778 openstack_placement-13.0.0/api-ref/source/samples/resource_providers/0000775000175000017500000000000000000000000026050 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/resource_providers/create-resource_provider.json0000664000175000017500000000217700000000000033754 0ustar00zuulzuul00000000000000{ "generation": 0, "links": [ { "href": "/placement/resource_providers/7d2590ae-fb85-4080-9306-058b4c915e3f", "rel": "self" }, { "href": "/placement/resource_providers/7d2590ae-fb85-4080-9306-058b4c915e3f/aggregates", "rel": "aggregates" }, { "href": "/placement/resource_providers/7d2590ae-fb85-4080-9306-058b4c915e3f/inventories", "rel": "inventories" }, { "href": "/placement/resource_providers/7d2590ae-fb85-4080-9306-058b4c915e3f/usages", "rel": "usages" }, { "href": "/placement/resource_providers/7d2590ae-fb85-4080-9306-058b4c915e3f/traits", "rel": "traits" }, { "href": "/placement/resource_providers/7d2590ae-fb85-4080-9306-058b4c915e3f/allocations", "rel": "allocations" } ], "name": "NFS Share", "uuid": "7d2590ae-fb85-4080-9306-058b4c915e3f", "parent_provider_uuid": "542df8ed-9be2-49b9-b4db-6d3183ff8ec8", "root_provider_uuid": "542df8ed-9be2-49b9-b4db-6d3183ff8ec8" } ././@PaxHeader0000000000000000000000000000021300000000000011451 xustar0000000000000000117 path=openstack_placement-13.0.0/api-ref/source/samples/resource_providers/create-resource_providers-request.json 22 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/resource_providers/create-resource_providers-reque0000664000175000017500000000022400000000000034275 0ustar00zuulzuul00000000000000{ "name": "NFS Share", "uuid": "7d2590ae-fb85-4080-9306-058b4c915e3f", "parent_provider_uuid": "542df8ed-9be2-49b9-b4db-6d3183ff8ec8" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/resource_providers/get-resource_provider.json0000664000175000017500000000220700000000000033262 0ustar00zuulzuul00000000000000{ "generation": 0, "links": [ { "href": "/placement/resource_providers/3b4005be-d64b-456f-ba36-0ffd02718868", "rel": "self" }, { "href": "/placement/resource_providers/3b4005be-d64b-456f-ba36-0ffd02718868/aggregates", "rel": "aggregates" }, { "href": "/placement/resource_providers/3b4005be-d64b-456f-ba36-0ffd02718868/inventories", "rel": "inventories" }, { "href": "/placement/resource_providers/3b4005be-d64b-456f-ba36-0ffd02718868/usages", "rel": "usages" }, { "href": "/placement/resource_providers/3b4005be-d64b-456f-ba36-0ffd02718868/traits", "rel": "traits" }, { "href": "/placement/resource_providers/3b4005be-d64b-456f-ba36-0ffd02718868/allocations", "rel": "allocations" } ], "name": "Ceph Storage Pool", "uuid": "3b4005be-d64b-456f-ba36-0ffd02718868", "parent_provider_uuid": "542df8ed-9be2-49b9-b4db-6d3183ff8ec8", "root_provider_uuid": 
"542df8ed-9be2-49b9-b4db-6d3183ff8ec8" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/resource_providers/get-resource_providers.json0000664000175000017500000000420500000000000033445 0ustar00zuulzuul00000000000000{ "resource_providers": [ { "generation": 1, "uuid": "99c09379-6e52-4ef8-9a95-b9ce6f68452e", "links": [ { "href": "/resource_providers/99c09379-6e52-4ef8-9a95-b9ce6f68452e", "rel": "self" }, { "href": "/resource_providers/99c09379-6e52-4ef8-9a95-b9ce6f68452e/aggregates", "rel": "aggregates" }, { "href": "/resource_providers/99c09379-6e52-4ef8-9a95-b9ce6f68452e/inventories", "rel": "inventories" }, { "href": "/resource_providers/99c09379-6e52-4ef8-9a95-b9ce6f68452e/usages", "rel": "usages" }, { "href": "/resource_providers/99c09379-6e52-4ef8-9a95-b9ce6f68452e/traits", "rel": "traits" }, { "href": "/resource_providers/99c09379-6e52-4ef8-9a95-b9ce6f68452e/allocations", "rel": "allocations" } ], "name": "vgr.localdomain", "parent_provider_uuid": "542df8ed-9be2-49b9-b4db-6d3183ff8ec8", "root_provider_uuid": "542df8ed-9be2-49b9-b4db-6d3183ff8ec8" }, { "generation": 2, "uuid": "d0b381e9-8761-42de-8e6c-bba99a96d5f5", "links": [ { "href": "/resource_providers/d0b381e9-8761-42de-8e6c-bba99a96d5f5", "rel": "self" }, { "href": "/resource_providers/d0b381e9-8761-42de-8e6c-bba99a96d5f5/aggregates", "rel": "aggregates" }, { "href": "/resource_providers/d0b381e9-8761-42de-8e6c-bba99a96d5f5/inventories", "rel": "inventories" }, { "href": "/resource_providers/d0b381e9-8761-42de-8e6c-bba99a96d5f5/usages", "rel": "usages" }, { "href": "/resource_providers/d0b381e9-8761-42de-8e6c-bba99a96d5f5/traits", "rel": "traits" }, { "href": "/resource_providers/d0b381e9-8761-42de-8e6c-bba99a96d5f5/allocations", "rel": "allocations" } ], "name": "pony1", "parent_provider_uuid": null, "root_provider_uuid": "d0b381e9-8761-42de-8e6c-bba99a96d5f5" } ] } ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=openstack_placement-13.0.0/api-ref/source/samples/resource_providers/update-resource_provider-request.json 22 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/resource_providers/update-resource_provider-reques0000664000175000017500000000015100000000000034313 0ustar00zuulzuul00000000000000 { "name": "Shared storage", "parent_provider_uuid": "542df8ed-9be2-49b9-b4db-6d3183ff8ec8" } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/resource_providers/update-resource_provider.json0000664000175000017500000000220400000000000033762 0ustar00zuulzuul00000000000000{ "generation": 0, "links": [ { "href": "/placement/resource_providers/33f26ae0-dbf2-485b-a24a-244d8280e29f", "rel": "self" }, { "href": "/placement/resource_providers/33f26ae0-dbf2-485b-a24a-244d8280e29f/aggregates", "rel": "aggregates" }, { "href": "/placement/resource_providers/33f26ae0-dbf2-485b-a24a-244d8280e29f/inventories", "rel": "inventories" }, { "href": "/placement/resource_providers/33f26ae0-dbf2-485b-a24a-244d8280e29f/usages", "rel": "usages" }, { "href": "/placement/resource_providers/33f26ae0-dbf2-485b-a24a-244d8280e29f/traits", "rel": "traits" }, { "href": "/placement/resource_providers/33f26ae0-dbf2-485b-a24a-244d8280e29f/allocations", "rel": "allocations" } ], "name": "Shared storage", "uuid": "33f26ae0-dbf2-485b-a24a-244d8280e29f", "parent_provider_uuid": 
"542df8ed-9be2-49b9-b4db-6d3183ff8ec8", "root_provider_uuid": "d0b381e9-8761-42de-8e6c-bba99a96d5f5" } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2327778 openstack_placement-13.0.0/api-ref/source/samples/root/0000775000175000017500000000000000000000000023107 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/root/get-root.json0000664000175000017500000000047700000000000025552 0ustar00zuulzuul00000000000000{ "versions" : [ { "min_version" : "1.0", "id" : "v1.0", "max_version" : "1.28", "status": "CURRENT", "links": [ { "href": "", "rel": "self" } ] } ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2327778 openstack_placement-13.0.0/api-ref/source/samples/traits/0000775000175000017500000000000000000000000023432 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/traits/get-traits.json0000664000175000017500000000017400000000000026412 0ustar00zuulzuul00000000000000{ "traits": [ "CUSTOM_HW_FPGA_CLASS1", "CUSTOM_HW_FPGA_CLASS2", "CUSTOM_HW_FPGA_CLASS3" ] } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2327778 openstack_placement-13.0.0/api-ref/source/samples/usages/0000775000175000017500000000000000000000000023413 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/usages/get-usages-1.38.json0000664000175000017500000000064700000000000026750 0ustar00zuulzuul00000000000000{ "usages" : { "INSTANCE" : { "consumer_count" : 5, "MEMORY_MB" : 512, "VCPU" : 2, "DISK_GB" : 5 }, "MIGRATION" : { "DISK_GB" : 5, "VCPU" : 2, "consumer_count" : 2, "MEMORY_MB" : 512 }, "unknown" : { "VCPU" : 2, "DISK_GB" : 5, "consumer_count" : 1, "MEMORY_MB" : 512 } } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/samples/usages/get-usages.json0000664000175000017500000000013400000000000026350 0ustar00zuulzuul00000000000000{ "usages": { "DISK_GB": 5, "MEMORY_MB": 512, "VCPU": 2 } } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/traits.inc0000664000175000017500000000566200000000000022472 0ustar00zuulzuul00000000000000====== Traits ====== Traits are *qualitative* characteristics of resource providers. The classic example for traits can be requesting disk from different providers: a user may request 80GiB of disk space for an instance (quantitative), but may also expect that the disk be SSD instead of spinning disk (qualitative). Traits provide a way to mark that a storage provider is SSD or spinning. .. note:: Traits API requests are available starting from version 1.6. List traits =========== Return a list of valid trait strings according to parameters specified. .. rest_method:: GET /traits Normal Response Codes: 200 Request ------- Several query parameters are available to filter the returned list of traits. If multiple different parameters are provided, the results of all filters are merged with a boolean `AND`. .. 
rest_parameters:: parameters.yaml - name: trait_name_query - associated: trait_associated Response -------- .. rest_parameters:: parameters.yaml - traits: traits Response Example ---------------- .. literalinclude:: ./samples/traits/get-traits.json :language: javascript Show traits =========== Check if a trait name exists in this cloud. .. rest_method:: GET /traits/{name} Normal Response Codes: 204 Error response codes: itemNotFound(404) Request ------- .. rest_parameters:: parameters.yaml - name: trait_name Response -------- No body content is returned on a successful GET. Update traits ============= Insert a new custom trait. If the trait already exists, a 204 will be returned. There are two kinds of traits: the standard traits and the custom traits. The standard traits are interoperable across different OpenStack cloud deployments. The definition of standard traits comes from the `os-traits` library. The standard traits are read-only in the placement API, which means that the user can't modify any standard traits through the API. The custom traits are used by admin users to manage the non-standard qualitative information of resource providers. .. rest_method:: PUT /traits/{name} Normal Response Codes: 201, 204 Error response codes: badRequest(400) * `400 BadRequest` if the trait name is not prefixed with the `CUSTOM_` prefix. Request ------- .. rest_parameters:: parameters.yaml - name: trait_name Response -------- .. rest_parameters:: parameters.yaml - Location: location No body content is returned on a successful PUT. Delete traits ============= Delete the trait specified by `{name}`. Note that only custom traits can be deleted. .. rest_method:: DELETE /traits/{name} Normal Response Codes: 204 Error response codes: badRequest(400), itemNotFound(404), conflict(409) * `400 BadRequest` if the name to delete is a standard trait. * `404 Not Found` if no such trait exists. * `409 Conflict` if the name to delete has associations with any ResourceProvider. Request ------- .. rest_parameters:: parameters.yaml - name: trait_name Response -------- No body content is returned on a successful DELETE. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/api-ref/source/usages.inc0000664000175000017500000000261700000000000022450 0ustar00zuulzuul00000000000000====== Usages ====== Represent the consumption of resources for a project and user. .. note:: Usages API requests are available starting from version 1.9. List usages =========== Return a report of usage information for resources associated with the project identified by `project_id` and the user identified by `user_id`. The value is a dictionary of resource classes paired with the sum of the allocations of that resource class for the provided parameters. .. rest_method:: GET /usages Normal Response Codes: 200 Error response codes: badRequest(400) Request ------- .. rest_parameters:: parameters.yaml - project_id: project_id - user_id: user_id - consumer_type: consumer_type_req Response (microversions 1.38 - ) -------------------------------- .. rest_parameters:: parameters.yaml - usages.consumer_type: consumer_type - usages.consumer_type.consumer_count: consumer_count - usages.consumer_type.RESOURCE_CLASS: resources_single Response Example (microversions 1.38 - ) ---------------------------------------- .. literalinclude:: ./samples/usages/get-usages-1.38.json :language: javascript Response (microversions 1.9 - 1.36) ----------------------------------- ..
rest_parameters:: parameters.yaml - usages: resources Response Example (microversions 1.9 - 1.36) ------------------------------------------- .. literalinclude:: ./samples/usages/get-usages.json :language: javascript ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/bindep.txt0000664000175000017500000000254300000000000017643 0ustar00zuulzuul00000000000000# This is a cross-platform list tracking distribution packages needed for install and tests; # see https://docs.openstack.org/infra/bindep/ for additional information. build-essential [platform:dpkg test] gcc [platform:rpm test] # gettext and graphviz are needed by doc builds only. For transition, # have them in both doc and test. # TODO(jaegerandi): Remove test once infra scripts are updated. gettext [doc test] graphviz [doc test] language-pack-en [platform:ubuntu] libffi-dev [platform:dpkg test] libffi-devel [platform:rpm test] libmysqlclient-dev [platform:ubuntu] libmariadb-dev-compat [platform:debian] libpq-dev [platform:dpkg test] libsqlite3-dev [platform:dpkg test] libxml2-dev [platform:dpkg test] libxslt-devel [platform:rpm test] libxslt1-dev [platform:dpkg test] locales [platform:debian] mysql [platform:rpm] mysql-client [platform:dpkg !platform:debian] mysql-devel [platform:rpm test] mysql-server [!platform:debian] mariadb-server [platform:debian] pkg-config [platform:dpkg test] pkgconfig [platform:rpm test] postgresql postgresql-client [platform:dpkg] postgresql-devel [platform:rpm test] postgresql-server [platform:rpm] python3-all [platform:dpkg test] python3-all-dev [platform:dpkg test] python3 [platform:rpm test] python3-devel [platform:rpm test] sqlite-devel [platform:rpm test] libpcre3-dev [platform:dpkg test] pcre-devel [platform:rpm test] ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2327778 openstack_placement-13.0.0/doc/0000775000175000017500000000000000000000000016402 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/README.rst0000664000175000017500000000055500000000000020076 0ustar00zuulzuul00000000000000OpenStack Placement Documentation README ======================================== Configuration, contributor, install, and usage documentation is sourced here and built to: https://docs.openstack.org/placement/latest/ Note that the Placement API reference is maintained under the ``/api-ref`` directory and built to: https://docs.openstack.org/api-ref/placement/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/requirements.txt0000664000175000017500000000060000000000000021662 0ustar00zuulzuul00000000000000sphinx>=2.1.1 # BSD sphinxcontrib-actdiag>=0.8.5 # BSD sphinxcontrib-seqdiag>=0.8.4 # BSD sphinx-feature-classification>=0.2.0 # Apache-2.0 os-api-ref>=1.4.0 # Apache-2.0 openstackdocstheme>=2.2.1 # Apache-2.0 # releasenotes reno>=3.1.0 # Apache-2.0 # redirect tests in docs whereto>=0.3.0 # Apache-2.0 # needed to generate osprofiler config options osprofiler>=1.4.0 # Apache-2.0 ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2327778 openstack_placement-13.0.0/doc/source/0000775000175000017500000000000000000000000017702 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 
xustar000000000000000028 mtime=1743591511.2327778 openstack_placement-13.0.0/doc/source/_extra/0000775000175000017500000000000000000000000021164 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/_extra/.htaccess0000664000175000017500000000141300000000000022761 0ustar00zuulzuul00000000000000redirectmatch 301 ^/placement/([^/]+)/specs/train/approved/2005297-negative-aggregate-membership.html /placement/$1/specs/train/implemented/2005297-negative-aggregate-membership.html redirectmatch 301 ^/placement/([^/]+)/specs/train/approved/placement-resource-provider-request-group-mapping-in-allocation-candidates.html /placement/$1/specs/train/implemented/placement-resource-provider-request-group-mapping-in-allocation-candidates.html redirectmatch 301 ^/placement/([^/]+)/specs/train/approved/2005575-nested-magic-1.html /placement/$1/specs/train/implemented/2005575-nested-magic-1.html redirectmatch 301 ^/placement/([^/]+)/usage/index.html /placement/$1/user/index.html redirectmatch 301 ^/placement/([^/]+)/usage/provider-tree.html /placement/$1/user/provider-tree.html ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2327778 openstack_placement-13.0.0/doc/source/_static/0000775000175000017500000000000000000000000021330 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/_static/.placeholder0000664000175000017500000000025600000000000023616 0ustar00zuulzuul00000000000000Sphinx 2.2.0 gets upset when a directory it is configured for does not exist. This directory is only used for automatically generated configuration and policy sample files. ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2327778 openstack_placement-13.0.0/doc/source/admin/0000775000175000017500000000000000000000000020772 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/admin/index.rst0000664000175000017500000000120600000000000022632 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Upgrade ======= .. toctree:: :maxdepth: 2 upgrade-notes ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/admin/upgrade-notes.rst0000664000175000017500000000161300000000000024302 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ============= Upgrade Notes ============= This section provides notes on upgrading to a given target release. .. note:: As a reminder, the :ref:`placement-status upgrade check <placement-status-checks>` tool can be used to help determine the status of your deployment and how ready it is to perform an upgrade. ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2327778 openstack_placement-13.0.0/doc/source/cli/0000775000175000017500000000000000000000000020451 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/cli/index.rst0000664000175000017500000000141700000000000022315 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Command-line Utilities ====================== In this section you will find information on placement's command line utilities: .. toctree:: :maxdepth: 1 placement-manage placement-status ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/cli/placement-manage.rst0000664000175000017500000001052100000000000024400 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ================ placement-manage ================ Synopsis ======== :: placement-manage Description =========== :program:`placement-manage` is used to perform administrative tasks with the placement service. It is designed for use by operators and deployers. Options ======= The standard pattern for executing a ``placement-manage`` command is:: placement-manage [-h] [--config-dir DIR] [--config-file PATH] [] Run without arguments to see a list of available command categories:: placement-manage You can also run with a category argument such as ``db`` to see a list of all commands in that category:: placement-manage db Configuration options (for example the ``[placement_database]/connection`` URL) are by default found in a file at ``/etc/placement/placement.conf``. The ``config-dir`` and ``config-file`` arguments may be used to select a different file.
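For example, a deployment that keeps its configuration in a non-default location might invoke the database sync command (described below) as follows. This is only an illustrative sketch: the path shown here is made up for the example and is not a required layout::

    placement-manage --config-file /opt/placement/placement.conf db sync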
The following sections describe the available categories and arguments for placement-manage. Placement Database ~~~~~~~~~~~~~~~~~~ ``placement-manage db version`` Print the current database version. ``placement-manage db sync`` Upgrade the database schema to the most recent version. The local database connection is determined by ``[placement_database]/connection`` in the configuration file used by placement-manage. If the ``connection`` option is not set, the command will fail. The defined database must already exist. ``placement-manage db stamp `` Stamp the revision table with the given revision; don’t run any migrations. This can be used when the database already exists and you want to bring it under alembic control. ``placement-manage db online_data_migrations [--max-count]`` Perform data migration to update all live data. ``--max-count`` controls the maximum number of objects to migrate in a given call. If not specified, migration will occur in batches of 50 until fully complete. Returns exit code 0 if no (further) updates are possible, 1 if the ``--max-count`` option was used and some updates were completed successfully (even if others generated errors), 2 if some updates generated errors and no other migrations were able to take effect in the last batch attempted, or 127 if invalid input is provided (e.g. non-numeric max-count). This command should be called after upgrading database schema and placement services on all controller nodes. If it exits with partial updates (exit status 1) it should be called again, even if some updates initially generated errors, because some updates may depend on others having completed. If it exits with status 2, intervention is required to resolve the issue causing remaining updates to fail. It should be considered successfully completed only when the exit status is 0. For example:: $ placement-manage db online_data_migrations Running batches of 50 until complete 2 rows matched query create_incomplete_consumers, 2 migrated +---------------------------------------------+-------------+-----------+ | Migration | Total Found | Completed | +---------------------------------------------+-------------+-----------+ | set_root_provider_ids | 0 | 0 | | create_incomplete_consumers | 2 | 2 | +---------------------------------------------+-------------+-----------+ In the above example, the ``create_incomplete_consumers`` migration found two candidate records which required a data migration. Since ``--max-count`` defaults to 50 and only two records were migrated with no more candidates remaining, the command completed successfully with exit code 0. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/cli/placement-status.rst0000664000175000017500000000521500000000000024477 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
================ placement-status ================ Synopsis ======== :: placement-status [] Description =========== :program:`placement-status` is a tool that provides routines for checking the status of a Placement deployment. Options ======= The standard pattern for executing a :program:`placement-status` command is:: placement-status [] Run without arguments to see a list of available command categories:: placement-status Categories are: * ``upgrade`` Detailed descriptions are below. You can also run with a category argument such as ``upgrade`` to see a list of all commands in that category:: placement-status upgrade These sections describe the available categories and arguments for :program:`placement-status`. Upgrade ~~~~~~~ .. _placement-status-checks: ``placement-status upgrade check`` Performs a release-specific readiness check before restarting services with new code. This command expects to have complete configuration and access to databases and services. **Return Codes** .. list-table:: :widths: 20 80 :header-rows: 1 * - Return code - Description * - 0 - All upgrade readiness checks passed successfully and there is nothing to do. * - 1 - At least one check encountered an issue and requires further investigation. This is considered a warning but the upgrade may be OK. * - 2 - There was an upgrade status check failure that needs to be investigated. This should be considered something that stops an upgrade. * - 255 - An unexpected error occurred. **History of Checks** **1.0.0 (Stein)** * Checks were added for incomplete consumers and missing root provider ids both of which can be remedied by running the ``placement-manage db online_data_migrations`` command. **2.0.0 (Train)** * The ``Missing Root Provider IDs`` upgrade check will now result in a failure if there are still ``resource_providers`` records with a null ``root_provider_id`` value. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/conf.py0000664000175000017500000001070200000000000021201 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # placement documentation build configuration file # # Refer to the Sphinx documentation for advice on configuring this file: # # http://www.sphinx-doc.org/en/stable/config.html import os import sys # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath('../../')) sys.path.insert(0, os.path.abspath('../')) sys.path.insert(0, os.path.abspath('./')) # -- General configuration ---------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 
# TODO(efried): Trim this moar extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo', 'openstackdocstheme', 'sphinx.ext.coverage', 'sphinx.ext.graphviz', 'sphinx_feature_classification.support_matrix', 'oslo_config.sphinxconfiggen', 'oslo_config.sphinxext', 'oslo_policy.sphinxpolicygen', 'oslo_policy.sphinxext', 'sphinxcontrib.actdiag', 'sphinxcontrib.seqdiag', ] # openstackdocstheme options openstackdocs_repo_name = 'openstack/placement' openstackdocs_pdf_link = True openstackdocs_use_storyboard = True config_generator_config_file = '../../etc/placement/config-generator.conf' sample_config_basename = '_static/placement' policy_generator_config_file = [ ('../../etc/placement/policy-generator.conf', '_static/placement') ] actdiag_html_image_format = 'SVG' actdiag_antialias = True seqdiag_html_image_format = 'SVG' seqdiag_antialias = True todo_include_todos = True # The suffix of source filenames. source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. copyright = '2010-present, OpenStack Foundation' # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = False # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'native' # A list of ignored prefixes for module index sorting. modindex_common_prefix = ['placement.'] # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. html_theme = 'openstackdocs' # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] html_extra_path = ['_extra'] # -- Options for LaTeX output ------------------------------------------------- # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass # [howto/manual]). 
latex_documents = [ ('index', 'doc-placement.tex', 'Placement Documentation', 'OpenStack Foundation', 'manual'), ] latex_domain_indices = False latex_elements = { 'makeindex': '', 'printindex': '', 'preamble': r'\setcounter{tocdepth}{3}', 'maxlistdepth': '10', } # Disable usage of xindy https://bugzilla.redhat.com/show_bug.cgi?id=1643664 latex_use_xindy = False # Disable smartquotes, they don't work in latex smartquotes_excludes = {'builders': ['latex']} # -- Options for openstackdocstheme ------------------------------------------- # keep this ordered to keep mriedem happy openstackdocs_projects = [ 'neutron', 'nova', 'oslo.versionedobjects', 'placement', 'reno', ] ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2367778 openstack_placement-13.0.0/doc/source/configuration/0000775000175000017500000000000000000000000022551 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/configuration/config.rst0000664000175000017500000000045500000000000024554 0ustar00zuulzuul00000000000000===================== Configuration Options ===================== The following is an overview of all available configuration options in Placement. For a sample configuration file, refer to :doc:`/configuration/sample-config`. .. show-options:: :config-file: etc/placement/config-generator.conf ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/configuration/index.rst0000664000175000017500000000243400000000000024415 0ustar00zuulzuul00000000000000=================== Configuration Guide =================== The static configuration for Placement lives in two main files: ``placement.conf`` and ``policy.yaml``. These are described below. Configuration ------------- * :doc:`Config Reference `: A complete reference of all configuration options available in the ``placement.conf`` file. * :doc:`Sample Config File `: A sample config file with inline documentation. .. TODO(efried):: Get this working * :nova-doc:`Configuration Guide `: Detailed configuration guides for various parts of you Nova system. Helpful reference for setting up specific hypervisor backends. Policy ------ Placement, like most OpenStack projects, uses a policy language to restrict permissions on REST API actions. * :doc:`Policy Reference `: A complete reference of all policy points in placement and what they impact. * :doc:`Sample Policy File `: A sample placement policy file with inline documentation. .. # NOTE(mriedem): This is the section where we hide things that we don't # actually want in the table of contents but sphinx build would fail if # they aren't in the toctree somewhere. .. toctree:: :hidden: policy sample-policy config sample-config ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/configuration/policy.rst0000664000175000017500000000113700000000000024604 0ustar00zuulzuul00000000000000================== Placement Policies ================== .. warning:: JSON formatted policy file is deprecated since Placement 5.0.0 (Wallaby). The `oslopolicy-convert-json-to-yaml`__ tool will migrate your existing JSON-formatted policy file to YAML in a backward-compatible way. .. 
__: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html The following is an overview of all available policies in Placement. For a sample configuration file, refer to :doc:`/configuration/sample-policy`. .. show-policy:: :config-file: etc/placement/policy-generator.conf ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/configuration/sample-config.rst0000664000175000017500000000115500000000000026031 0ustar00zuulzuul00000000000000========================= Sample Configuration File ========================= The following is a sample Placement configuration for adaptation and use. For a detailed overview of all available configuration options, refer to :doc:`/configuration/config`. The sample configuration can also be viewed in :download:`file form `. .. important:: The sample configuration file is auto-generated from placement when this documentation is built. You must ensure your version of placement matches the version of this documentation. .. literalinclude:: /_static/placement.conf.sample ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/configuration/sample-policy.rst0000664000175000017500000000152600000000000026065 0ustar00zuulzuul00000000000000============================ Sample Placement Policy File ============================ .. warning:: JSON formatted policy file is deprecated since Placement 5.0.0 (Wallaby). The `oslopolicy-convert-json-to-yaml`__ tool will migrate your existing JSON-formatted policy file to YAML in a backward-compatible way. .. __: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html The following is a sample placement policy file for adaptation and use. The sample policy can also be viewed in :download:`file form `. .. important:: The sample policy file is auto-generated from placement when this documentation is built. You must ensure your version of placement matches the version of this documentation. .. literalinclude:: /_static/placement.policy.yaml.sample ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2367778 openstack_placement-13.0.0/doc/source/contributor/0000775000175000017500000000000000000000000022254 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/contributor/api-ref-guideline.rst0000664000175000017500000001234500000000000026301 0ustar00zuulzuul00000000000000======================= API reference guideline ======================= The API reference should be updated when placement APIs are modified (microversion is bumped, etc.). This page describes the guideline for updating the API reference. API reference ============= * `Placement API reference `_ The guideline to write the API reference ======================================== The API reference consists of the following files. * API reference text: ``api-ref/source/*.inc`` * Parameter definition: ``api-ref/source/parameters.yaml`` * JSON request/response samples: ``api-ref/source/samples/*`` Structure of inc file --------------------- Each REST API is described in the text file (\*.inc). 
The structure of the inc file is as follows: - Title - API Name - REST Method - URL - Description - Normal status code - Error status code - Request - Parameters - JSON request body example (if exists) - Response - Parameters - JSON response body example (if exists) - API Name (Next) - ... REST Method ----------- The guideline for describing HTTP methods is described in this section. All methods supported by a resource should be listed in the API reference. The order of methods ~~~~~~~~~~~~~~~~~~~~ Methods have to be sorted by each URI in the following order: 1. GET 2. POST 3. PUT 4. PATCH (unused by Nova) 5. DELETE And sorted from broadest to narrowest. So for /servers it would be: 1. GET /servers 2. POST /servers 3. GET /servers/details 4. GET /servers/{server_id} 5. PUT /servers/{server_id} 6. DELETE /servers/{server_id} Method titles spelling and case ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The spelling and the case of method names in the title have to match what is in the code. For instance, the title for the section on method "Get Rdp Console" should be "Get Rdp Console (os-getRDPConsole Action)" NOT "Get Rdp Console (Os-Getrdpconsole Action)" Response codes ~~~~~~~~~~~~~~ The normal response codes (20x) and error response codes have to be listed. The order of response codes should be in ascending order. The descriptions of typical error response codes are as follows: .. list-table:: Error response codes :header-rows: 1 * - Response codes - Description * - 400 - badRequest(400) * - 401 - unauthorized(401) * - 403 - forbidden(403) * - 404 - itemNotFound(404) * - 409 - conflict(409) * - 410 - gone(410) * - 501 - notImplemented(501) * - 503 - serviceUnavailable(503) Parameters ---------- Parameters need to be defined in 2 subsections: one is in the 'Request' subsection, the other is in the 'Response' subsection. The queries, request headers and attributes go in the 'Request' subsection and response headers and attributes go in the 'Response' subsection. The order of parameters in each API ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The request and response parameters have to be listed in the following order in each API in the text file. 1. Header 2. Path 3. Query 4. Body a. Top level object (i.e. server) b. Required fields c. Optional fields d. Parameters added in microversions (by the microversion they were added) Parameter type ~~~~~~~~~~~~~~ The parameters are defined in the parameter file (``parameters.yaml``). The type of a parameter has to be one of the following: * ``array`` It is a list. * ``boolean`` * ``float`` * ``integer`` * ``none`` The value is always ``null`` in a response or should be ``null`` in a request. * ``object`` The value is a dict. * ``string`` If the value can be specified by multiple types, specify one type in the file and mention the other types in the description. Required or optional ~~~~~~~~~~~~~~~~~~~~ In the parameter file, define the ``required`` field in each parameter. .. list-table:: :widths: 15 85 * - ``true`` - The parameter must be specified in the request, or the parameter always appears in the response. * - ``false`` - It is not always necessary to specify the parameter in the request, or the parameter does not appear in the response in some cases. e.g. A config option defines whether the parameter appears in the response or not. A parameter appears when administrators call but does not appear when non-admin users call.
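To make the above concrete, the following is a minimal sketch of what a single entry in ``parameters.yaml`` can look like. The entry is illustrative only (the description text is invented for this example and is not copied from the real file); consult the existing ``api-ref/source/parameters.yaml`` for the canonical definitions.

.. code-block:: yaml

   trait_name:
     in: path
     required: true
     type: string
     description: |
       The name of a trait.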
If a parameter must be specified in the request or always appears in the response in the micoversion added or later, the parameter must be defined as required (``true``). The order of parameters in the parameter file ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The order of parameters in the parameter file has to be kept as follows: 1. By in type a. Header b. Path c. Query d. Body 2. Then alphabetical by name Example ------- .. TODO:: The guideline for request/response JSON bodies should be added. Body ---- .. TODO:: The guideline for the introductory text and the context for the resource in question should be added. Reference ========= * `The description for Parameters whose values are null `_ * `The definition of "Optional" parameter `_ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/contributor/architecture.rst0000664000175000017500000004163400000000000025500 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ============== Architecture ============== The placement service is straightforward: It is a `WSGI`_ application that sends and receives JSON, using an RDBMS (usually MySQL) for persistence. As state is managed solely in the DB, scaling the placement service is done by increasing the number of WSGI application instances and scaling the RDBMS using traditional database scaling techniques. For sake of consistency and because there was initially intent to make the entities in the placement service available over RPC, :oslo.versionedobjects-doc:`versioned objects <>` were used to provide the interface between the HTTP application layer and the SQLAlchemy-driven persistence layer. In the Stein release, that interface was refactored to remove the use of versioned objects and split functionality into smaller modules. Though the placement service does not aspire to be a *microservice* it does aspire to continue to be small and minimally complex. This means a relatively small amount of middleware that is not configurable, and a limited number of exposed resources where any given resource is represented by one (and only one) URL that expresses a noun that is a member of the system. Adding additional resources should be considered a significant change requiring robust review from many stakeholders. The set of HTTP resources represents a concise and constrained grammar for expressing the management of resource providers, inventories, resource classes, traits, and allocations. If a solution is initially designed to need more resources or a more complex grammar that may be a sign that we need to give our goals greater scrutiny. Is there a way to do what we want with what we have already? Can some other service help? Is a new collaborating service required? Minimal Framework ================= The API is set up to use a minimal framework that tries to keep the structure of the application as discoverable as possible and keeps the HTTP interaction near the surface. 
The goal of this is to make things easy to trace when debugging or adding functionality. Functionality which is required for every request is handled in raw WSGI middleware that is composed in the ``placement.deploy`` module. Dispatch or routing is handled declaratively via the ``ROUTE_DECLARATIONS`` map defined in the ``placement.handler`` module. Mapping is by URL plus request method. The destination is a complete WSGI application, using a subclass of the `wsgify`_ method from `WebOb`_ to provide a `Request`_ object that provides convenience methods for accessing request headers, bodies, and query parameters and for generating responses. In the placement API these mini-applications are called *handlers*. The ``wsgify`` subclass is provided in ``placement.wsgi_wrapper`` as ``PlacementWsgify``. It is used to make sure that JSON formatted error responses are structured according to the API-SIG `errors`_ guideline. This division between middleware, dispatch and handlers is supposed to provide clues on where a particular behavior or functionality should be implemented. Like most such systems, this does not always work but is a useful tool. .. _microversion process: Microversions ============= The placement API makes use of `microversions`_ to allow the release of new features on an opt in basis. See :doc:`/index` for an up to date history of the available microversions. The rules around when a microversion is needed are modeled after those of the :nova-doc:`compute API `. When adding a new microversion there are a few bits of required housekeeping that must be done in the code: * Update the ``VERSIONS`` list in ``placement/microversion.py`` to indicate the new microversion and give a very brief summary of the added feature. * Update ``placement/rest_api_version_history.rst`` to add a more detailed section describing the new microversion. * Add a :reno-doc:`release note <>` with a ``features`` section announcing the new or changed feature and the microversion. * If the ``version_handler`` decorator (see below) has been used, increment ``TOTAL_VERSIONED_METHODS`` in ``placement/tests/unit/test_microversion.py``. This provides a confirmatory check just to make sure you are paying attention and as a helpful reminder to do the other things in this list. * Include functional gabbi tests as appropriate (see :doc:`testing`). At the least, update the ``latest microversion`` test in ``placement/tests/functional/gabbits/microversion.yaml``. * Update the `API Reference`_ documentation as appropriate. The source is located under ``api-ref/source/``. * If a new error code has been added in ``placement/errors.py``, it should be added to the `API Reference`_. In the placement API, microversions only use the modern form of the version header:: OpenStack-API-Version: placement 1.2 If a valid microversion is present in a request it will be placed, as a ``Version`` object, into the WSGI environment with the ``placement.microversion`` key. Often, accessing this in handler code directly (to control branching) is the most explicit and granular way to have different behavior per microversion. A ``Version`` instance can be treated as a tuple of two ints and compared as such or there is a ``matches`` method. A ``version_handler`` decorator is also available. It makes it possible to have multiple different handler methods of the same (fully-qualified by package) name, each available for a different microversion window. 
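As an illustration only (this handler does not exist in the placement source; the handler name, the cut-off microversion, and the response bodies are invented for the sketch), branching directly on the ``Version`` object taken from the WSGI environment can look roughly like this:

.. code-block:: python

   import webob

   from oslo_serialization import jsonutils

   from placement import wsgi_wrapper


   @wsgi_wrapper.PlacementWsgify
   def example_handler(req):
       # The microversion middleware has already parsed the request header
       # and stored a Version object in the WSGI environment.
       want_version = req.environ['placement.microversion']

       # A Version compares like a (major, minor) tuple; 1.14 is an
       # arbitrary cut-off chosen only for this sketch.
       if want_version >= (1, 14):
           body = {'example': 'behavior for microversion 1.14 and later'}
       else:
           body = {'example': 'behavior for earlier microversions'}

       return webob.Response(
           body=jsonutils.dump_as_bytes(body),
           content_type='application/json')

Real handlers also need to set the cache and last-modified headers and use the error codes described elsewhere in this document, but the branching pattern is the same.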
If a request wants a microversion that is not available, a defined status code is returned (usually ``404`` or ``405``). There is a unit test in place which will fail if there are version intersections. Adding a New Handler ==================== Adding a new URL or a new method (e.g, ``PATCH``) to an existing URL requires adding a new handler function. In either case a new microversion and release note is required. When adding an entirely new route a request for a lower microversion should return a ``404``. When adding a new method to an existing URL a request for a lower microversion should return a ``405``. In either case, the ``ROUTE_DECLARATIONS`` dictionary in the ``placement.handler`` module should be updated to point to a function within a module that contains handlers for the type of entity identified by the URL. Collection and individual entity handlers of the same type should be in the same module. As mentioned above, the handler function should be decorated with ``@wsgi_wrapper.PlacementWsgify``, take a single argument ``req`` which is a WebOb `Request`_ object, and return a WebOb `Response`_. For ``PUT`` and ``POST`` methods, request bodies are expected to be JSON based on a content-type of ``application/json``. This may be enforced by using a decorator: ``@util.require_content('application/json')``. If the body is not JSON, a ``415`` response status is returned. Response bodies are usually JSON. A handler can check the ``Accept`` header provided in a request using another decorator: ``@util.check_accept('application/json')``. If the header does not allow JSON, a ``406`` response status is returned. If a handler returns a response body, a ``Last-Modified`` header should be included with the response. If the entity or entities in the response body are directly associated with an object (or objects, in the case of a collection response) that has an ``updated_at`` (or ``created_at``) field, that field's value can be used as the value of the header (WebOb will take care of turning the datetime object into a string timestamp). A ``util.pick_last_modified`` is available to help choose the most recent last-modified when traversing a collection of entities. If there is no directly associated object (for example, the output is the composite of several objects) then the ``Last-Modified`` time should be ``timeutils.utcnow(with_timezone=True)`` (the timezone must be set in order to be a valid HTTP timestamp). For example, the response__ to ``GET /allocation_candidates`` should have a last-modified header of now because it is composed from queries against many different database entities, presents a mixture of result types (allocation requests and provider summaries), and has a view of the system that is only meaningful *now*. __ https://docs.openstack.org/api-ref/placement/#list-allocation-candidates If a ``Last-Modified`` header is set, then a ``Cache-Control`` header with a value of ``no-cache`` must be set as well. This is to avoid user-agents inadvertently caching the responses. JSON sent in a request should be validated against a JSON Schema. A ``util.extract_json`` method is available. This takes a request body and a schema. If multiple schema are used for different microversions of the same request, the caller is responsible for selecting the right one before calling ``extract_json``. When a handler needs to read or write the data store it should use methods on the objects found in the ``placement.objects`` package. 
Doing so requires a context which is provided to the handler method via the WSGI environment. It can be retrieved as follows:: context = req.environ['placement.context'] .. note:: If your change requires new methods or new objects in the ``placement.objects`` package, after you have made sure that you really do need those new methods or objects (you may not!) make those changes in a patch that is separate from and prior to the HTTP API change. If a handler needs to return an error response, with the advent of `Placement API Error Handling`_, it is possible to include a code in the JSON error response. This can be used to distinguish different errors with the same HTTP response status code (a common case is a generation conflict versus an inventory in use conflict). Error codes are simple namespaced strings (e.g., ``placement.inventory.inuse``) for which symbols are maintained in ``placement.errors``. Adding a symbol to a response is done by using the ``comment`` kwarg to a WebOb exception, like this:: except exception.InventoryInUse as exc: raise webob.exc.HTTPConflict( _('update conflict: %(error)s') % {'error': exc}, comment=errors.INVENTORY_INUSE) Code that adds newly raised exceptions should include an error code. Find additional guidelines on use in the docs for ``placement.errors``. When a new error code is added, also document it in the `API Reference`_. Testing of handler code is described in :doc:`testing`. Database Schema Changes ======================= At some point in every application's life it becomes necessary to change the structure of its database. Modifying the SQLAlchemy models (in placement/db/sqlachemy/models.py) is necessary for the application to understand the new structure, but that will not change the actual underlying database. To do that, Placement uses ``alembic`` to run database migrations. Alembic calls each change a **revision**. To create a migration with alembic, run the ``alembic revision`` command. Alembic will then generate a new revision file with a unique file name, and place it in the ``alembic/versions/`` directory: .. code-block:: console ed@devenv:~/projects/placement$ alembic -c placement/db/sqlalchemy/alembic.ini revision -m "Add column foo to bar table" Generating /home/ed/projects/placement/placement/db/sqlalchemy/alembic/versions/dfb006498ad2_add_column_foo_to_bar_table.py ... done Let us break down that command: - The **-c** parameter tells alembic where to find its configuration file. - **revision** is the alembic subcommand for creating a new revision file. - The **-m** parameter specifies a brief comment explaining the change. - The generated file from alembic will have a name consisting of a random hash prefix, followed by an underscore, followed by your **-m** comment, and a **.py** extension. So be sure to keep your comment as brief as possible while still being descriptive. The generated file will look something like this: .. code-block:: python """Add column foo to bar table Revision ID: dfb006498ad2 Revises: 0378df171af3 Create Date: 2018-10-29 20:02:58.290779 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = 'dfb006498ad2' down_revision = '0378df171af3' branch_labels = None depends_on = None def upgrade(): pass The top of the file is the docstring that will show when you review your revision history. If we did not include the **-m** comment when we ran the ``alembic revision`` command, this would just contain "empty message". 
If you did not specify the comment when creating the file, be sure to replace "empty message" with a brief comment describing the reason for the database change. You then need to define the changes in the ``upgrade()`` method. The code used in these methods is basic SQLAlchemy code for creating and modifying tables. You can examine existing migrations in the project to see examples of what this code looks like, as well as find more in-depth usage of Alembic in the `Alembic tutorial`_. One other option when creating the revision is to add the ``--autogenerate`` parameter to the revision command. This assumes that you have already updated the SQLAlchemy models, and have a connection to the placement database configured. When run with this option, the ``upgrade()`` method of the revision file is filled in for you by alembic as it compares the schema described in your models.py script and the actual state of the database. You should always verify the revision script to make sure it does just what you intended, both by reading the code as well as running the tests, as there are some things that autogenerate cannot deduce. See `autogenerate limitations`_ for more detailed information. Gotchas ======= This section tries to shed some light on some of the differences between the placement API and some of the other OpenStack APIs or on situations which may be surprising or unexpected. * The placement API is somewhat more strict about ``Content-Type`` and ``Accept`` headers in an effort to follow the HTTP RFCs. If a user-agent sends some JSON in a ``PUT`` or ``POST`` request without a ``Content-Type`` of ``application/json`` the request will result in an error. If a ``GET`` request is made without an ``Accept`` header, the response will default to being ``application/json``. If a request is made with an explicit ``Accept`` header that does not include ``application/json`` then there will be an error and the error will attempt to be in the requested format (for example, ``text/plain``). * If a URL exists, but a request is made using a method that that URL does not support, the API will respond with a ``405`` error. Sometimes in the nova APIs this can be a ``404`` (which is wrong, but understandable given the constraints of the code). * Because each handler is individually wrapped by the ``PlacementWsgify`` decorator any exception that is a subclass of ``webob.exc.WSGIHTTPException`` that is raised from within the handler, such as ``webob.exc.HTTPBadRequest``, will be caught by WebOb and turned into a valid `Response`_ containing headers and body set by WebOb based on the information given when the exception was raised. It will not be seen as an exception by any of the middleware in the placement stack. In general this is a good thing, but it can lead to some confusion if, for example, you are trying to add some middleware that operates on exceptions. Other exceptions that are not from `WebOb`_ will raise outside the handlers where they will either be caught in the ``__call__`` method of the ``PlacementHandler`` app that is responsible for dispatch, or by the ``FaultWrap`` middleware. .. _WSGI: https://www.python.org/dev/peps/pep-3333/ .. _wsgify: http://docs.webob.org/en/latest/api/dec.html .. _WebOb: http://docs.webob.org/en/latest/ .. _Request: http://docs.webob.org/en/latest/reference.html#request .. _Response: http://docs.webob.org/en/latest/#response .. _microversions: http://specs.openstack.org/openstack/api-wg/guidelines/microversion_specification.html .. 
_errors: http://specs.openstack.org/openstack/api-wg/guidelines/errors.html .. _API Reference: https://docs.openstack.org/api-ref/placement/ .. _Placement API Error Handling: http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/placement-api-error-handling.html .. _`Alembic tutorial`: https://alembic.zzzcomputing.com/en/latest/tutorial.html .. _`autogenerate limitations`: https://alembic.zzzcomputing.com/en/latest/autogenerate.html#what-does-autogenerate-detect-and-what-does-it-not-detect ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/contributor/contributing.rst0000664000175000017500000003131700000000000025522 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ============================ So You Want to Contribute... ============================ For general information on contributing to OpenStack, please check out the `contributor guide `_ to get started. It covers all the basics that are common to all OpenStack projects: the accounts you need, the basics of interacting with our Gerrit review system, how we communicate as a community, etc. Below will cover the more project specific information you need to get started with placement. Communication ------------- As an official OpenStack project, Placement follows the overarching processes outlined in the `Project Team Guide`_. Contribution is welcomed from any interested parties and takes many different forms. To make sure everything gets the attention it deserves and work is not duplicated there are some guidelines, stated here. If in doubt, ask someone, either by sending a message to the `openstack-discuss`_ mailing list with a ``[placement]`` subject tag, or by visiting the ``#openstack-nova`` IRC channel on ``webchat.oftc.net``. Submitting and Managing Bugs ---------------------------- Bugs found in placement should be reported in `Launchpad`_ by creating a new bug in the ``placement`` project. .. _new_bug: New Bugs ~~~~~~~~ If you are submitting a `new bug`_, explain the problem, the steps taken that led to the bad results, and the expected results. Please also add as much of the following information as possible: * Relevant lines from the ``placement-api`` log. * The OpenStack release (e.g., ``Stein``). * The method used to install or deploy placement. * The operating system(s) on which the placement is running. * The version(s) of Python being used. Tag the bug with ``tags``, like doc, api, etcetera. Learn more about launchpad from `openstack launchpad doc`_. .. _triage: Triaging Bugs ~~~~~~~~~~~~~ Triaging newly submitted bugs to confirm they really are bugs, gather missing information, and to suggest possible solutions is one of the most important contributions that can be made to any open source project. If a new bug doesn't have tags, add the relevant tag as per the area of affected code. Leave comments on the bug if you have questions or ideas. 
If you are relatively certain about a solution, add the steps of that solution as tasks on the bug. While triaging, only if you are sure, update the status of the bug from new to others. If submitting a change related to a bug, the `gerrit`_ system will automatically link to launchpad bug if you include ``bug_id:`` identifiers in your commit message, like this:: Related-Bug: 2005189 Partial-Bug: 2005190 Closes-Bug: 2005190 Reviewing Code -------------- Like other OpenStack projects, Placement uses `gerrit`_ to facilitate peer code review. It is hoped and expected that anyone who would like to commit code to the Placement project will also spend time reviewing code for the sake of the common good. The more people reviewing, the more code that will eventually merge. See `How to Review Changes the OpenStack Way`_ for an overview of the review and voting process. There is a small group of people within the Placement team called `core reviewers`_. These are people who are empowered to signal (via the ``+2`` vote) that code is of a suitable standard to be merged and is aligned with the current goals of the project. Core reviewers are regularly selected from all active reviewers based on the quantity and quality of their reviews and demonstrated understanding of the Placement code and goals of the project. The point of review is to evolve potentially useful code to merged working code that is aligned with the standards of style, testing, and correctness that we share as group. It is not for creating perfect code. Review should always be `constructive`_, encouraging, and friendly. People who contribute code are doing the project a favor, make it feel that way. Some guidelines that reviewers and patch submitters should be aware of: * It is very important that a new patch set gets some form of review as soon as possible, even if only to say "we've seen this". Latency in the review process has been identified as hugely discouraging for new and experienced contributors alike. * Follow up changes, to fix minor problems identified during review, are encouraged. We want to keep things moving. * As a reviewer, remember that not all patch submitters will know these guidelines. If it seems they don't, point them here and be patient in the meantime. * Gerrit can be good for code review, but is often not a great environment for having a discussion that is struggling to resolve to a decision. Move discussion to the mailing list sooner rather than later. Add a link to the thread in the `list archive`_ to the review. * If the CI system is throwing up random failures in test runs, you should endeavor whenever possible to investigate, not simply ``recheck``. A flakey gate is an indication that OpenStack is not robust and at the root of all this, making OpenStack work well is what we are doing. See here for `How to Recheck`_ Special Considerations For Core Reviewers ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Core reviewers have special powers. With great power comes great responsibility and thus being held to a standard. As a core reviewer, your job is to enable other people to contribute good code. Under ideal conditions it is more important to be reviewing other people's code and bugs and fixing bugs than it is to be writing your own features. Frequently conditions will not be ideal, but strive to enable others. 
When there are open questions that need to be resolved, try to prefer the `openstack-discuss`_ list over IRC so that anyone can be involved according to their own schedules and input from unexpected sources can be available. Writing Code ------------ This document cannot enumerate all the many ways to write good Python code. Instead it lists some guidelines that, if followed, will help make sure your code is reviewed promptly and merges quickly. As with everything else in this document, these guidelines will evolve over time and may be violated for special circumstances. If you have questions, ask. See :doc:`/contributor/index` for an overview of Placement and how the various pieces fit together. * Divide your change into a series of commits each of which encapsulates a single unit of functionality but still results in a working service. Smaller changes are easier to review. * If your change is to the HTTP API, familiarize yourself with :ref:`microversion process`. * If there is a series of changes leading to an HTTP API change, exposing that API change should be the last patch in the series. That patch must update the API_ reference and include a `release note`_. * Changes must include tests. There is a separate document on :doc:`/contributor/testing`. * Run ``tox`` before submitting your code to gerrit_. This will run unit and functional tests in both Python 2 and Python 3, and pep8 style checks. Placement tests, including functional, are fast, so this should not be too much of a hardship. By running the tests locally you avoid wasting scarce resources in the CI system. * Keep the tests fast. Avoid sleeps, network connections, and external processes in the tests. * Keep Placement fast. There is a ``placement-perfload`` job that runs with every patch. Within that is a log file, ``/logs/placement-perf.txt[.gz]`` that gives rough timing information for a common operation. We want those numbers to stay small. * We follow the code formatting guidelines of `PEP 8`_. Check your code with ``tox -epep8`` (for all files) or ``tox -efast8`` (for just the files you changed). You will not always agree with the advice provided. Follow it. * Where possible avoid using the visual indent style. Using it can make future changes unnecessarily difficult. This guideline is not enforced by pep8 and has been used throughout the code in the past. There's no need to fix old use. Instead of this .. code-block:: python return_value = self.some_method(arg1, arg2, arg3, arg4) prefer this .. code-block:: python return_value = self.some_method( arg1, arg2, arg3, arg4) New Features ------------ New functionality in Placement is developed as needed to meet new use cases or improve the handling of existing use cases. As a service used by other services in OpenStack, uses cases often originate in those other services. Considerable collaboration with other projects is often required to determine if any changes are needed in the Placement API_ or elsewhere in the project. That interaction should happen in the usual ways: At Project Team Gatherings, on the openstack-discuss_ list, and in IRC. Create a new bug as described in :ref:`new_bug` above. If a spec is required there are some guidelines for creating one: * A file should be created in the `placement code`_ in ``doc/source/specs//approved`` with a filename beginning with the identifier of the bug. 
For example:: docs/source/specs/train/approved/200056-infinite-resource-classes.rst More details on how to write a spec are included in a ``template.rst`` file found in the ``doc/source/specs`` directory. This may be copied to use as the starting point for a new spec. * Under normal circumstances specs should be proposed near the beginning of a release cycle so there is sufficient time to review the spec and its implementation as well as to make any necessary decisions about limiting the number of specs being worked in the same cycle. Unless otherwise announced at the beginning of a cycle, specs should merge before milestone-2 to be considered relevant for that cycle. Exceptions will be reviewed on a case by case basis. See the `stein schedule`_ for an example schedule. * Work items that are described in a spec should be reflected as tasks created on the originating launchpad bug. Update the bug with additional tasks as they are discovered. Most new tasks will not require updating the spec. * If, when developing a feature, the implementation significantly diverges from the spec, the spec should be updated to reflect the new reality. This should not be considered exceptional: It is normal for there to be learning during the development process which impacts the solution. * Though specs are presented with the Placement documentation and can usefully augment end-user documentation, they are not a substitute. Development of a new feature is not complete without documentation. When a spec was approved in a previous release cycle, but was not finished, it should be re-proposed (via gerrit) to the current cycle. Include ``Previously-Approved: `` in the commit message to highlight that fact. If there have been no changes, core reviewers should feel free to fast-approve (only one ``+2`` required) the change. Project Team Lead Duties ------------------------ PTL duties are enumerated in the `PTL guide`_. .. _Project Team Guide: https://docs.openstack.org/project-team-guide/ .. _openstack-discuss: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss .. _list archive: http://lists.openstack.org/pipermail/openstack-discuss/ .. _Launchpad: https://bugs.launchpad.net/placement .. _new bug: https://bugs.launchpad.net/placement/+filebug .. _gerrit: http://review.opendev.org/ .. _How to Review Changes the OpenStack Way: https://docs.openstack.org/project-team-guide/review-the-openstack-way.html .. _core reviewers: https://review.opendev.org/#/admin/groups/1936,members .. _constructive: https://governance.openstack.org/tc/reference/principles.html#we-value-constructive-peer-review .. _API: https://docs.openstack.org/api-ref/placement/ .. _placement code: https://opendev.org/openstack/placement .. _stein schedule: https://releases.openstack.org/stein/schedule.html .. _release note: https://docs.openstack.org/reno/latest/ .. _PEP 8: https://www.python.org/dev/peps/pep-0008/ .. _PTL guide: https://docs.openstack.org/project-team-guide/ptl.html .. _openstack launchpad doc: https://docs.openstack.org/contributors/common/task-tracking.html#launchpad .. _How to Recheck: https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/contributor/goals.rst0000664000175000017500000000432600000000000024120 0ustar00zuulzuul00000000000000.. 
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ===== Goals ===== Like many OpenStack projects, placement uses blueprints and specifications to plan and design upcoming work. Sometimes, however, certain types of work fit more in the category of wishlist, or when-we-get-around-to-it. These types of work are often not driven by user or operator feature requests, but are instead related to architectural, maintenance, and technical debt management goals that will make the lives of contributors to the project easier over time. In those cases a specification is too formal and detailed but it is still worthwhile to remember the idea and put it somewhere. That's what this document is for: a place to find and put goals for placement that are related to making contribution more pleasant and keep the project and product healthy, yet are too general to be considered feature requests. This document can also operate as one of several sources of guidance on how not to stray too far from the long term vision of placement. Don't Use Global Config ----------------------- Placement uses `oslo.config`_ to manage configuration, passing a reference to an ``oslo_config.cfg.ConfigOpts`` as required. Before things `were changed`_ a global was used instead. Placement inherited this behavior from nova, where using a global ``CONF`` is the normal way to interact with the configuration options. Continuing this pattern in placement made it difficult for nova to use externalized placement in its functional tests, so the situation was changed. We'd like to keep it this way as it makes the code easier to maintain. .. _oslo.config: https://docs.openstack.org/oslo.config .. _were changed: https://review.opendev.org/#/c/619121/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/contributor/index.rst0000664000175000017500000000315100000000000024115 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. =========================== Placement Developer Notes =========================== The Nova project introduced the placement service as part of the Newton release, and it was extracted to its own repository in the Stein release. The service provides an HTTP API to manage inventories of different classes of resources, such as disk or virtual cpus, made available by entities called resource providers. Information provided through the placement API is intended to enable more effective accounting of resources in an OpenStack deployment and better scheduling of various entities in the cloud. 
The document serves to explain the architecture of the system and to provide some guidance on how to maintain and extend the code. For more detail on why the system was created and how it does its job see :doc:`/index`. For some insight into the longer term goals of the system see :doc:`goals` and :doc:`vision-reflection`. .. toctree:: :maxdepth: 2 contributing architecture api-ref-guideline goals quick-dev testing vision-reflection ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/contributor/quick-dev.rst0000664000175000017500000001561700000000000024710 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. =========================== Quick Placement Development =========================== .. note:: This is one of many ways to achieve a simple *live* development environment for the placement service. This isn't meant to be the best way, or the only way. Its purpose is more to demonstrate the steps involved, so that people can learn from those steps and choose to assemble them in whatever ways work best for them. This content was originally written in a `blog post `_, which perhaps explains its folksy tone. Here are some instructions on how to spin up the placement wsgi script with uwsgi and a stubbed out ``placement.conf``, in case you want to see what happens. The idea here is that you want to experiment with the current placement code, using a live database, but you're not concerned with other services, don't want to deal with devstack, but need a level of interaction with the code and process that something like `placedock `_ can't provide. *As ever, even all of the above has lots of assumptions about experience and context. This document assumes you are someone who either is an OpenStack (and probably placement) developer, or would like to be one.* To make this go you need a unix-like OS, with a python3 dev environment, and git and mysql (or postgresql) installed. We'll be doing this work from within a virtualenv, built from the ``tox.ini`` in the placement code. Get The Code ============ The placement code lives at https://opendev.org/openstack/placement . We want to clone that:: git clone https://opendev.org/openstack/placement cd placement Setup The Database ================== We need to 1) create the database, 2) create a virtualenv to have the command, 3) use it to create the tables. The database can have whatever name you like. Whatever you choose, use it throughout this process. We choose ``placement``. 
You may need a user and password to talk to your database, setting that up is out of scope for this document:: mysql -uroot -psecret -e "DROP DATABASE IF EXISTS placement;" mysql -uroot -psecret -e "CREATE DATABASE placement CHARACTER SET utf8;" You may also need to set permissions:: mysql -uroot -psecret \ -e "GRANT ALL PRIVILEGES ON placement.* TO 'root'@'%' identified by 'secret';" Create a bare minimum placement.conf in the ``/etc/placement`` directory (which you may need to create):: [placement_database] connection = mysql+pymysql://root:secret@127.0.0.1/placement?charset=utf8 .. note:: You may choose the location of the configuration file on the command line when using the ``placement-manage`` command. Make the ``placement-manage`` command available by updating a virtualenv:: tox -epy36 --notest Run the command to create the tables:: .tox/py36/bin/placement-manage db sync You can confirm the tables are there with ``mysqlshow placement`` Run The Service =============== Now we want to run the service. We need to update ``placement.conf`` so it will produce debugging output and use the ``noauth`` strategy for authentication (so we don't also have to run Keystone). Make ``placement.conf`` look like this (adjusting for your database settings):: [DEFAULT] debug = True [placement_database] connection = mysql+pymysql://root:secret@127.0.0.1/placement?charset=utf8 [api] auth_strategy = noauth2 We need to install the uwsgi package into the virtualenv:: .tox/py36/bin/pip install uwsgi And then use uwsgi to run the service. Start it with:: .tox/py36/bin/uwsgi --http :8000 --wsgi-file .tox/py36/bin/placement-api --processes 2 --threads 10 .. note:: Adjust ``processes`` and ``threads`` as required. If you do not provide these arguments the server will be a single process and thus perform poorly. If that worked you'll see lots of debug output and ``spawned uWSGI worker``. Test that things are working from another terminal with curl:: curl -v http://localhost:8000/ Get a list of resource providers with (the ``x-auth-token`` header is required, ``openstack-api-version`` is optional but makes sure we are getting the latest functionality):: curl -H 'x-auth-token: admin' \ -H 'openstack-api-version: placement latest' \ http://localhost:8000/resource_providers The result ought to look something like this:: {"resource_providers": []} If it doesn't then something went wrong with the above and there should be more information in the terminal where ``uwsgi`` is running. From here you can experiment with creating resource providers and related placement features. If you change the placement code, ``ctrl-c`` to kill the uwsgi process and start it up again. For testing, you might enjoy `placecat `_. Here's all of the above as single script. As stated above this is for illustrative purposes. You should make your own:: #!/bin/bash set -xe # Change these as required CONF_DIR=/etc/placement DB_DRIVER=mysql+pymysql # we assume mysql throughout, feel free to change DB_NAME=placement DB_USER=root DB_PASS=secret REPO=https://opendev.org/openstack/placement # Create a directory for configuration to live. [[ -d $CONF_DIR ]] || (sudo mkdir $CONF_DIR && sudo chown $USER $CONF_DIR) # Establish database. Some of this may need sudo powers. Don't be shy # about changing the script. 
mysql -u$DB_USER -p$DB_PASS -e "DROP DATABASE IF EXISTS $DB_NAME;" mysql -u$DB_USER -p$DB_PASS -e "CREATE DATABASE $DB_NAME CHARACTER SET utf8;" mysql -u$DB_USER -p$DB_PASS -e "GRANT ALL PRIVILEGES ON $DB_NAME.* TO '$DB_USER'@'%' IDENTIFIED BY '$DB_PASS';" # clone the right code git clone $REPO cd placement # establish virtenv tox -epy36 --notest # write placement.conf cat< $CONF_DIR/placement.conf [DEFAULT] debug = True [placement_database] connection = $DB_DRIVER://${DB_USER}:${DB_PASS}@127.0.0.1/${DB_NAME}?charset=utf8 [api] auth_strategy = noauth2 EOF # Create database tables .tox/py36/bin/placement-manage db sync # install uwsgi .tox/py36/bin/pip install uwsgi # run uwsgi .tox/py36/bin/uwsgi --http :8000 --wsgi-file .tox/py36/bin/placement-api --processes 2 --threads 10 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/contributor/testing.rst0000664000175000017500000002000000000000000024453 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. =================== Testing Placement =================== Most of the handler code in the placement API is tested using `gabbi`_. Some utility code is tested with unit tests found in ``placement/tests/unit``. The back-end objects are tested with a combination of unit and functional tests found in ``placement/tests/unit/objects`` and ``placement/tests/functional/db``. When writing tests for handler code (that is, the code found in ``placement/handlers``) a good rule of thumb is that if you feel like there needs to be a unit test for some of the code in the handler, that is a good sign that the piece of code should be extracted to a separate method. That method should be independent of the handler method itself (the one decorated by the ``wsgify`` method) and testable as a unit, without mocks if possible. If the extracted method is useful for multiple resources consider putting it in the ``util`` package. As a general guide, handler code should be relatively short and where there are conditionals and branching, they should be reachable via the gabbi functional tests. This is merely a design goal, not a strict constraint. Using Gabbi ----------- Gabbi was developed in the `telemetry`_ project to provide a declarative way to test HTTP APIs that preserves visibility of both the request and response of the HTTP interaction. Tests are written in YAML files where each file is an ordered suite of tests. Fixtures (such as a database) are set up and torn down at the beginning and end of each file, not each test. JSON response bodies can be evaluated with `JSONPath`_. The placement WSGI application is run via `wsgi-intercept`_, meaning that real HTTP requests are being made over a file handle that appears to Python to be a socket. In the placement API the YAML files (aka "gabbits") can be found in ``placement/tests/functional/gabbits``. Fixture definitions are in ``placement/tests/functional/fixtures/gabbits.py``. 
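For orientation, each of these YAML files contains an ordered list of tests, optionally preceded by fixtures and default request settings. A minimal sketch, assuming the ``APIFixture`` defined in the fixtures module mentioned above and using purely illustrative assertions, might look like:

.. code-block:: yaml

    fixtures:
        - APIFixture

    defaults:
        request_headers:
            x-auth-token: admin
            accept: application/json

    tests:

    - name: list resource providers
      GET: /resource_providers
      status: 200
      response_json_paths:
          $.resource_providers: []

Each test in the sequence can depend on state created by the tests before it, which is why the ordering within a file matters.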
Tests are frequently grouped by handler name (e.g., ``resource-provider.yaml`` and ``inventory.yaml``). This is not a requirement and as we increase the number of tests it makes sense to have more YAML files with fewer tests, divided up by the arc of API interaction that they test. The gabbi tests are integrated into the functional tox target, loaded via ``placement/tests/functional/test_api.py``. If you want to run just the gabbi tests one way to do so is:: tox -efunctional test_api If you want to run just one yaml file (in this example ``inventory.yaml``):: tox -efunctional api.inventory It is also possible to run just one test from within one file. When you do this every test prior to the one you asked for will also be run. This is because the YAML represents a sequence of dependent requests. Select the test by using the name in the yaml file, replacing space with ``_``:: tox -efunctional api.inventory_post_new_ipv4_address_inventory .. note:: ``tox.ini`` in the placement repository is configured by a ``group_regex`` so that each gabbi YAML is considered a group. Thus, all tests in the file will be run in the same process when running stestr concurrently (the default). Writing More Gabbi Tests ------------------------ The docs for `gabbi`_ try to be complete and explain the `syntax`_ in some depth. Where something is missing or confusing, please log a `bug`_. While it is possible to test all aspects of a response (all the response headers, the status code, every attribute in a JSON structure) in one single test, doing so will likely make the test harder to read and will certainly make debugging more challenging. If there are multiple things that need to be asserted, making multiple requests is reasonable. Since database set up is only happening once per file (instead of once per test) and since there is no TCP overhead, the tests run quickly. While `fixtures`_ can be used to establish entities that are required for tests, creating those entities via the HTTP API results in tests which are more descriptive. For example the ``inventory.yaml`` file creates the resource provider to which it will then add inventory. This makes it easy to explore a sequence of interactions and a variety of responses with the tests: * create a resource provider * confirm it has empty inventory * add inventory to the resource provider (in a few different ways) * confirm the resource provider now has inventory * modify the inventory * delete the inventory * confirm the resource provider now has empty inventory Nothing special is required to add a new set of tests: create a YAML file with a unique name in the same directory as the others. The other files can provide examples. Gabbi can provide a useful way of doing test driven development of a new handler: create a YAML file that describes the desired URLs and behavior and write the code to make it pass. It's also possible to use gabbi against a running placement service, for example in devstack. See `gabbi-run`_ to get started. If you don't want to go to the trouble of using devstack, but do want a live server see :doc:`quick-dev`. Profiling --------- If you wish to profile requests to the placement service, to get an idea of which methods are consuming the most CPU or are being used repeatedly, it is possible to enable a ProfilerMiddleware_ to output per-request python profiling dumps. The environment (:doc:`quick-dev` is a good place to start) in which the service is running will need to have Werkzeug_ added. * If the service is already running, stop it. 
* Install Werkzeug. * Set an environment variable, ``OS_WSGI_PROFILER``, to a directory where profile results will be written. * Make sure the directory exists. * Start the service, ensuring the environment variable is passed to it. * Make an HTTP request that exercises the code you wish to profile. The profiling results will be in the directory named by ``OS_WSGI_PROFILER``. There are many ways to analyze the files. See `Profiling WSGI Apps`_ for an example. Profiling with OSProfiler ------------------------- To use `OSProfiler`_ with placement: * Add a [profiler] section to the placement.conf: .. code-block:: ini [profiler] connection_string = mysql+pymysql://root:admin@127.0.0.1/osprofiler?charset=utf8 hmac_keys = my-secret-key enabled = True * Include the hmac_keys in your API request: .. code-block:: console $ openstack resource provider list --os-profile my-secret-key The openstack client will return the trace id: .. code-block:: console Trace ID: 67428cdd-bfaa-496f-b430-507165729246 * Extract the trace in html format: .. code-block:: console $ osprofiler trace show --html 67428cdd-bfaa-496f-b430-507165729246 \ --connection-string mysql+pymysql://root:admin@127.0.0.1/osprofiler?charset=utf8 .. _bug: https://github.com/cdent/gabbi/issues .. _fixtures: http://gabbi.readthedocs.io/en/latest/fixtures.html .. _gabbi: https://gabbi.readthedocs.io/ .. _gabbi-run: http://gabbi.readthedocs.io/en/latest/runner.html .. _JSONPath: http://goessner.net/articles/JsonPath/ .. _ProfilerMiddleware: https://werkzeug.palletsprojects.com/en/master/middleware/profiler/ .. _Profiling WSGI Apps: https://anticdent.org/profiling-wsgi-apps.html .. _syntax: https://gabbi.readthedocs.io/en/latest/format.html .. _telemetry: http://specs.openstack.org/openstack/telemetry-specs/specs/kilo/declarative-http-tests.html .. _Werkzeug: https://palletsprojects.com/p/werkzeug/ .. _wsgi-intercept: http://wsgi-intercept.readthedocs.io/ .. _OSProfiler: https://docs.openstack.org/osprofiler/latest/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/contributor/vision-reflection.rst0000664000175000017500000000566700000000000026463 0ustar00zuulzuul00000000000000================= Vision Reflection ================= In late-2018, the OpenStack Technical Committee composed a `technical vision `_ of what OpenStack clouds should look like. This document compares the state of placement relative to that vision to provide some guidance on broad stroke ways in which placement may need to change to match the vision. Since placement is primarily a back-end and admin-only system (at least for now), many aspects of the vision document do not apply, but it is still a useful exercise. Note that there is also a placement :doc:`goals` document. The vision document is divided into three sections, which this document mirrors. This should be a living document which evolves as placement itself evolves. The Pillars of Cloud ==================== The sole interface to the placement service is an HTTP API, meaning that in theory, anything can talk to it, enabling the self-service and application control that define a cloud. However, at the moment the data managed by placement is considered for administrators only. This policy could be changed, but doing so would be a dramatic adjustment in the scope of who placement is for and what it does. 
Since placement has not yet fully satisfied its original vision to clarify and ease cloud resource allocation such a change should be considered secondary to completing the original goals. OpenStack-specific Considerations ================================= Placement uses microversions to help manage interoperability and bi-directional compatibility. Because placement has used microversions from the very start a great deal of the valuable functionality is only available in an opt-in fashion. In fact, it would be accurate to say that a placement service at the default microversion is incapable of being a placement service. We may wish to evaluate (and publish) if there is a minimum microversion at which placement is useful. To some extent this is already done with the way nova requires specific placement microversions, and for placement to be upgraded in advance of nova. As yet, placement provides no dedicated mechanism for partitioning its resource providers amongst regions. Aggregates can be used for this purpose but this is potentially cumbersome in the face of multi-region use cases where a single placement service is used to manage resources in several clouds. This is an area that is already under consideration, and would bring placement closer to matching the "partitioning" aspect of the vision document. Design Goals ============ Placement already maps well to several of the design goals in the vision document, adhering to fairly standard methods for scalability, reliability, customization, and flexible utilization models. It does this by being a simple web app over a database and not much more. We should strive to keep this. Details of how we plan to do so should be maintained in the :doc:`goals` document. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/index.rst0000664000175000017500000000472300000000000021551 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. =========== Placement =========== The placement API service was introduced in the 14.0.0 Newton release within the nova repository and extracted to the `placement repository`_ in the 19.0.0 Stein release. This is a REST API stack and data model used to track resource provider inventories and usages, along with different classes of resources. For example, a resource provider can be a compute node, a shared storage pool, or an IP allocation pool. The placement service tracks the inventory and usage of each provider. For example, an instance created on a compute node may be a consumer of resources such as RAM and CPU from a compute node resource provider, disk from an external shared storage pool resource provider and IP addresses from an external IP pool resource provider. The types of resources consumed are tracked as **classes**. The service provides a set of standard resource classes (for example ``DISK_GB``, ``MEMORY_MB``, and ``VCPU``) and provides the ability to define custom resource classes as needed. 
Each resource provider may also have a set of traits which describe qualitative aspects of the resource provider. Traits describe an aspect of a resource provider that cannot itself be consumed but a workload may wish to specify. For example, available disk may be solid state drives (SSD). .. _placement repository: https://opendev.org/openstack/placement Usages ====== .. toctree:: :maxdepth: 2 user/index Command Line Interface ====================== .. toctree:: :maxdepth: 2 cli/index Configuration ============= .. toctree:: :maxdepth: 2 configuration/index Contribution ============ .. toctree:: :maxdepth: 2 contributor/index Specifications ============== .. toctree:: :maxdepth: 2 specs/index Deployment ========== .. toctree:: :maxdepth: 2 install/index Administrator Guide =================== .. toctree:: :maxdepth: 2 admin/index ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2367778 openstack_placement-13.0.0/doc/source/install/0000775000175000017500000000000000000000000021350 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/install/from-pypi.rst0000664000175000017500000002124300000000000024026 0ustar00zuulzuul00000000000000Install and configure Placement from PyPI ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The section describes how to install and configure the placement service using packages from PyPI_. Placement works with Python version 2.7, but version 3.6 or higher is recommended. This document assumes you have a working MySQL server and a working Python environment, including the :ref:`about-pip` package installer. Depending on your environment, you may wish to install placement in a virtualenv_. This document describes how to run placement with uwsgi_ as its web server. This is but one of many different ways to host the service. Placement is a well-behaved WSGI_ application so should be straightforward to host with any WSGI server. If using placement in an OpenStack environment, you will need to ensure it is up and running before starting services that use it but after services it uses. That means after Keystone_, but before anything else. Prerequisites ------------- Before installing the service, you will need to create the database, service credentials, and API endpoints, as described in the following sections. .. _about-pip: pip ^^^ Install `pip `_ from PyPI_. .. note:: Examples throughout this reference material use the ``pip`` command. This may need to be pathed or spelled differently (e.g. ``pip3``) depending on your installation and Python version. python-openstackclient ^^^^^^^^^^^^^^^^^^^^^^ If not already installed, install the ``openstack`` command line tool: .. code-block:: console # pip install python-openstackclient .. _create-database-pypi: Create Database ^^^^^^^^^^^^^^^ Placement is primarily tested with MySQL/MariaDB so that is what is described here. It also works well with PostgreSQL and likely with many other databases supported by sqlalchemy_. To create the database, complete these steps: .. TODO(cdent): Extract this to a shared document for all the install docs. #. Use the database access client to connect to the database server as the ``root`` user or by using ``sudo`` as appropriate: .. code-block:: console # mysql #. Create the ``placement`` database: .. code-block:: console MariaDB [(none)]> CREATE DATABASE placement; #. Grant proper access to the database: .. 
code-block:: console MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \ IDENTIFIED BY 'PLACEMENT_DBPASS'; Replace ``PLACEMENT_DBPASS`` with a suitable password. #. Exit the database access client. .. _configure-endpoints-pypi: Configure User and Endpoints ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. note:: If you are not using Keystone, you can skip the steps below but will need to configure the :oslo.config:option:`api.auth_strategy` setting with a value of ``noauth2``. See also :doc:`/contributor/quick-dev`. .. note:: You will need to authenticate to Keystone as an ``admin`` before making these calls. There are many different ways to do this, depending on how your system was set up. If you do not have an ``admin-openrc`` file, you will have something similar. .. important:: These documents use an endpoint URL of ``http://controller:8778/`` as an example only. You should configure placement to use whatever hostname and port works best for your environment. Using SSL on the default port, with either a domain or path specific to placement, is recommended. For example: ``https://mygreatcloud.com/placement`` or ``https://placement.mygreatcloud.com/``. .. include:: shared/endpoints.rst .. _configure-conf-pypi: Install and configure components -------------------------------- The default location of the placement configuration file is ``/etc/placement/placement.conf``. A different directory may be chosen by setting ``OS_PLACEMENT_CONFIG_DIR`` in the environment. It is also possible to run the service with a partial or no configuration file and set some options in `the environment`_. See :doc:`/configuration/index` for additional configuration settings not mentioned here. .. note:: In the steps below, ``controller`` is used as a stand in for the hostname of the hosts where keystone, mysql, and placement are running. These may be distinct. The keystone host (used for ``auth_url`` and ``www_authenticate_uri``) should be the unversioned public endpoint for the Identity service. .. TODO(cdent): Some of these database steps could be extracted to a shared document used by all the install docs. #. Install placement and required database libraries: .. code-block:: console # pip install openstack-placement pymysql #. Create the ``/etc/placement/placement.conf`` file and complete the following actions: * Create a ``[placement_database]`` section and configure database access: .. path /etc/placement/placement.conf .. code-block:: ini [placement_database] connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement Replace ``PLACEMENT_DBPASS`` with the password you chose for the placement database. * Create ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity service access: .. path /etc/placement/placement.conf .. code-block:: ini [api] auth_strategy = keystone # use noauth2 if not using keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000/ auth_url = http://controller:5000/ memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = placement password = PLACEMENT_PASS Replace ``PLACEMENT_PASS`` with the password you chose for the ``placement`` user in the Identity service. .. note:: The value of ``user_name``, ``password``, ``project_domain_name`` and ``user_domain_name`` need to be in sync with your keystone config. 
* You may wish to set the :oslo.config:option:`debug` option to ``True`` to produce more verbose log output. #. Populate the ``placement`` database: .. code-block:: console $ placement-manage db sync .. note:: An alternative is to use the :oslo.config:option:`placement_database.sync_on_startup` option. Finalize installation --------------------- Now that placement itself has been installed we need to launch the service in a web server. What follows provides a very basic web server that, while relatively performant, is not set up to be easy to manage. Since there are many web servers and many ways to manage them, such things are outside the scope of this document. Install and run the web server: #. Install the ``uwsgi`` package (these instructions are against version 2.0.18): .. code-block:: console # pip install uwsgi #. Run the server with the placement WSGI application in a terminal window: .. warning:: Make sure you are using the correct ``uwsgi`` binary. It may be in multiple places in your path. The wrong version will fail and complain about bad arguments. .. code-block:: console # uwsgi -M --http :8778 --wsgi-file /usr/local/bin/placement-api \ --processes 2 --threads 10 #. In another terminal confirm the server is running using ``curl``. The URL should match the public endpoint set in :ref:`configure-endpoints-pypi`. .. code-block:: console $ curl http://controller:8778/ The output will look something like this: .. code-block:: json { "versions" : [ { "id" : "v1.0", "max_version" : "1.31", "links" : [ { "href" : "", "rel" : "self" } ], "min_version" : "1.0", "status" : "CURRENT" } ] } Further interactions with the system can be made with osc-placement_. .. _PyPI: https://pypi.org .. _virtualenv: https://pypi.org/project/virtualenv/ .. _uwsgi: https://uwsgi-docs.readthedocs.io/en/latest/WSGIquickstart.html .. _WSGI: https://www.python.org/dev/peps/pep-3333/ .. _Keystone: https://docs.openstack.org/keystone/latest/ .. _sqlalchemy: https://www.sqlalchemy.org .. _the environment: https://docs.openstack.org/oslo.config/latest/reference/drivers.html#module-oslo_config.sources._environment .. _osc-placement: https://docs.openstack.org/osc-placement/latest/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/install/index.rst0000664000175000017500000001412600000000000023215 0ustar00zuulzuul00000000000000============ Installation ============ .. note:: Before the Stein release the placement code was in Nova alongside the compute REST API code (nova-api). Make sure that the release version of this document matches the release version you want to deploy. Steps Overview -------------- This subsection gives an overview of the process without going into detail on the methods used. **1. Deploy the API service** Placement provides a ``placement-api`` WSGI script for running the service with Apache, nginx or other WSGI-capable web servers. Depending on what packaging solution is used to deploy OpenStack, the WSGI script may be in ``/usr/bin`` or ``/usr/local/bin``. ``placement-api``, as a standard WSGI script, provides a module level ``application`` attribute that most WSGI servers expect to find. This means it is possible to run it with lots of different servers, providing flexibility in the face of different deployment scenarios. 
Common scenarios include: * apache2_ with mod_wsgi_ * apache2 with mod_proxy_uwsgi_ * nginx_ with uwsgi_ * nginx with gunicorn_ In all of these scenarios the host, port and mounting path (or prefix) of the application is controlled in the web server's configuration, not in the configuration (``placement.conf``) of the placement application. When placement was `first added to DevStack`_ it used the ``mod_wsgi`` style. Later it `was updated`_ to use mod_proxy_uwsgi_. Looking at those changes can be useful for understanding the relevant options. DevStack is configured to host placement at ``/placement`` on either the default port for http or for https (``80`` or ``443``) depending on whether TLS is being used. Using a default port is desirable. By default, the placement application will get its configuration for settings such as the database connection URL from ``/etc/placement/placement.conf``. The directory the configuration file will be found in can be changed by setting ``OS_PLACEMENT_CONFIG_DIR`` in the environment of the process that starts the application. With recent releases of ``oslo.config``, configuration options may also be set in the environment_. .. note:: When using uwsgi with a front end (e.g., apache2 or nginx) something needs to ensure that the uwsgi process is running. In DevStack this is done with systemd_. This is one of many different ways to manage uwsgi. This document refrains from declaring a set of installation instructions for the placement service. This is because a major point of having a WSGI application is to make the deployment as flexible as possible. Because the placement API service is itself stateless (all state is in the database), it is possible to deploy as many servers as desired behind a load balancing solution for robust and simple scaling. If you familiarize yourself with installing generic WSGI applications (using the links in the common scenarios list, above), those techniques will be applicable here. .. _apache2: http://httpd.apache.org/ .. _mod_wsgi: https://modwsgi.readthedocs.io/ .. _mod_proxy_uwsgi: http://uwsgi-docs.readthedocs.io/en/latest/Apache.html .. _nginx: http://nginx.org/ .. _uwsgi: http://uwsgi-docs.readthedocs.io/en/latest/Nginx.html .. _gunicorn: http://gunicorn.org/ .. _first added to DevStack: https://review.opendev.org/#/c/342362/ .. _was updated: https://review.opendev.org/#/c/456717/ .. _systemd: https://review.opendev.org/#/c/448323/ .. _environment: https://docs.openstack.org/oslo.config/latest/reference/drivers.html#environment **2. Synchronize the database** The placement service uses its own database, defined in the :oslo.config:group:`placement_database` section of configuration. The :oslo.config:option:`placement_database.connection` option **must** be set or the service will not start. The command line tool :doc:`/cli/placement-manage` can be used to migrate the database tables to their correct form, including creating them. The database described by the ``connection`` option must already exist and have appropriate access controls defined. Another option for synchronization is to set :oslo.config:option:`placement_database.sync_on_startup` to ``True`` in configuration. This will perform any missing database migrations as the placement web service starts. Whether you choose to sync automaticaly or use the command line tool depends on the constraints of your environment and deployment tooling. **3. Create accounts and update the service catalog** Create a **placement** service user with an **admin** role in Keystone. 
The placement API is a separate service and thus should be registered under a **placement** service type in the service catalog. Clients of placement, such as the resource tracker in the nova-compute node, will use the service catalog to find the placement endpoint. See :ref:`configure-endpoints-pypi` for examples of creating the service user and catalog entries. Devstack sets up the placement service on the default HTTP port (80) with a ``/placement`` prefix instead of using an independent port. Installation Packages --------------------- This section provides instructions on installing placement from Linux distribution packages. .. warning:: These installation documents are a work in progress. Some of the distribution packages mentioned are not yet available so the instructions **will not work**. The placement service provides an `HTTP API`_ used to track resource provider inventories and usages. More detail can be found at the :doc:`placement overview `. Placement operates as a web service over a data model. Installation involves creating the necessary database and installing and configuring the web service. This is a straightforward process, but there are quite a few steps to integrate placement with the rest of an OpenStack cloud. .. note:: Placement is required by some of the other OpenStack services, notably nova, therefore it should be installed before those other services but after Identity (keystone). .. toctree:: :maxdepth: 1 from-pypi.rst install-obs.rst install-rdo.rst install-ubuntu.rst verify.rst .. _HTTP API: https://docs.openstack.org/api-ref/placement/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/install/install-obs.rst0000664000175000017500000000744400000000000024342 0ustar00zuulzuul00000000000000Install and configure Placement for openSUSE and SUSE Linux Enterprise ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the placement service when using openSUSE or SUSE Linux Enterprise packages. Prerequisites ------------- Before you install and configure the placement service, you must create a database, service credentials, and API endpoints. Create Database ^^^^^^^^^^^^^^^ #. To create the database, complete these steps: * Use the database access client to connect to the database server as the ``root`` user: .. code-block:: console $ mysql -u root -p * Create the ``placement`` database: .. code-block:: console MariaDB [(none)]> CREATE DATABASE placement; * Grant proper access to the database: .. code-block:: console MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \ IDENTIFIED BY 'PLACEMENT_DBPASS'; Replace ``PLACEMENT_DBPASS`` with a suitable password. * Exit the database access client. Configure User and Endpoints ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. include:: shared/endpoints.rst Install and configure components -------------------------------- .. include:: note_configuration_vary_by_distribution.rst .. note:: As of the Newton release, SUSE OpenStack packages are shipped with the upstream default configuration files. For example, ``/etc/placement/placement.conf`` has customizations in ``/etc/placement/placement.conf.d/010-placement.conf``. 
While the following instructions modify the default configuration file, adding a new file in ``/etc/placement/placement.conf.d`` achieves the same result. #. Install the packages: .. code-block:: console # zypper install openstack-placement #. Edit the ``/etc/placement/placement.conf`` file and complete the following actions: * In the ``[placement_database]`` section, configure database access: .. path /etc/placement/placement.conf .. code-block:: ini [placement_database] # ... connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement Replace ``PLACEMENT_DBPASS`` with the password you chose for the placement database. * In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity service access: .. path /etc/placement/placement.conf .. code-block:: ini [api] # ... auth_strategy = keystone [keystone_authtoken] # ... auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = placement password = PLACEMENT_PASS Replace ``PLACEMENT_PASS`` with the password you chose for the ``placement`` user in the Identity service. .. note:: Comment out or remove any other options in the ``[keystone_authtoken]`` section. .. note:: The value of ``user_name``, ``password``, ``project_domain_name`` and ``user_domain_name`` need to be in sync with your keystone config. #. Populate the ``placement`` database: .. code-block:: console # su -s /bin/sh -c "placement-manage db sync" placement .. note:: Ignore any deprecation messages in this output. Finalize installation --------------------- * Enable the placement API Apache vhost: .. code-block:: console # mv /etc/apache2/vhosts.d/openstack-placement-api.conf.sample \ /etc/apache2/vhosts.d/openstack-placement-api.conf # systemctl reload apache2.service ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/install/install-rdo.rst0000664000175000017500000000640100000000000024333 0ustar00zuulzuul00000000000000Install and configure Placement for Red Hat Enterprise Linux and CentOS Stream ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the placement service when using Red Hat Enterprise Linux or CentOS Stream packages. Prerequisites ------------- Before you install and configure the placement service, you must create a database, service credentials, and API endpoints. Create Database ^^^^^^^^^^^^^^^ #. To create the database, complete these steps: * Use the database access client to connect to the database server as the ``root`` user: .. code-block:: console $ mysql -u root -p * Create the ``placement`` database: .. code-block:: console MariaDB [(none)]> CREATE DATABASE placement; * Grant proper access to the database: .. code-block:: console MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \ IDENTIFIED BY 'PLACEMENT_DBPASS'; Replace ``PLACEMENT_DBPASS`` with a suitable password. * Exit the database access client. Configure User and Endpoints ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. include:: shared/endpoints.rst Install and configure components -------------------------------- .. include:: note_configuration_vary_by_distribution.rst #. Install the packages: .. 
code-block:: console # dnf install openstack-placement-api #. Edit the ``/etc/placement/placement.conf`` file and complete the following actions: * In the ``[placement_database]`` section, configure database access: .. path /etc/placement/placement.conf .. code-block:: ini [placement_database] # ... connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement Replace ``PLACEMENT_DBPASS`` with the password you chose for the placement database. * In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity service access: .. path /etc/placement/placement.conf .. code-block:: ini [api] # ... auth_strategy = keystone [keystone_authtoken] # ... auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = placement password = PLACEMENT_PASS Replace ``PLACEMENT_PASS`` with the password you chose for the ``placement`` user in the Identity service. .. note:: Comment out or remove any other options in the ``[keystone_authtoken]`` section. .. note:: The value of ``user_name``, ``password``, ``project_domain_name`` and ``user_domain_name`` need to be in sync with your keystone config. #. Populate the ``placement`` database: .. code-block:: console # su -s /bin/sh -c "placement-manage db sync" placement .. note:: Ignore any deprecation messages in this output. Finalize installation --------------------- * Restart the httpd service: .. code-block:: console # systemctl restart httpd ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/install/install-ubuntu.rst0000664000175000017500000000627000000000000025075 0ustar00zuulzuul00000000000000Install and configure Placement for Ubuntu ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the placement service when using Ubuntu packages. Prerequisites ------------- Before you install and configure the placement service, you must create a database, service credentials, and API endpoints. Create Database ^^^^^^^^^^^^^^^ #. To create the database, complete these steps: * Use the database access client to connect to the database server as the ``root`` user: .. code-block:: console # mysql * Create the ``placement`` database: .. code-block:: console MariaDB [(none)]> CREATE DATABASE placement; * Grant proper access to the database: .. code-block:: console MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \ IDENTIFIED BY 'PLACEMENT_DBPASS'; Replace ``PLACEMENT_DBPASS`` with a suitable password. * Exit the database access client. Configure User and Endpoints ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. include:: shared/endpoints.rst Install and configure components -------------------------------- .. include:: note_configuration_vary_by_distribution.rst #. Install the packages: .. code-block:: console # apt install placement-api #. Edit the ``/etc/placement/placement.conf`` file and complete the following actions: * In the ``[placement_database]`` section, configure database access: .. path /etc/placement/placement.conf .. code-block:: ini [placement_database] # ... connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement Replace ``PLACEMENT_DBPASS`` with the password you chose for the placement database. 
* In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity service access: .. path /etc/placement/placement.conf .. code-block:: ini [api] # ... auth_strategy = keystone [keystone_authtoken] # ... auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = placement password = PLACEMENT_PASS Replace ``PLACEMENT_PASS`` with the password you chose for the ``placement`` user in the Identity service. .. note:: Comment out or remove any other options in the ``[keystone_authtoken]`` section. .. note:: The value of ``user_name``, ``password``, ``project_domain_name`` and ``user_domain_name`` need to be in sync with your keystone config. #. Populate the ``placement`` database: .. code-block:: console # su -s /bin/sh -c "placement-manage db sync" placement .. note:: Ignore any deprecation messages in this output. Finalize installation --------------------- * Reload the web server to adjust to get new configuration settings for placement. .. code-block:: console # service apache2 restart ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/install/note_configuration_vary_by_distribution.rst0000664000175000017500000000046300000000000032333 0ustar00zuulzuul00000000000000.. note:: Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (``...``) in the configuration snippets indicates potential default configuration options that you should retain. ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2367778 openstack_placement-13.0.0/doc/source/install/shared/0000775000175000017500000000000000000000000022616 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/install/shared/endpoints.rst0000664000175000017500000001132400000000000025354 0ustar00zuulzuul00000000000000 #. Source the ``admin`` credentials to gain access to admin-only CLI commands: .. code-block:: console $ . admin-openrc #. Create a Placement service user using your chosen ``PLACEMENT_PASS``: .. code-block:: console $ openstack user create --domain default --password-prompt placement User Password: Repeat User Password: +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | default | | enabled | True | | id | fa742015a6494a949f67629884fc7ec8 | | name | placement | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ #. Add the Placement user to the service project with the admin role: .. code-block:: console $ openstack role add --project service --user placement admin .. note:: This command provides no output. #. Create the Placement API entry in the service catalog: .. 
code-block:: console $ openstack service create --name placement \ --description "Placement API" placement +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | Placement API | | enabled | True | | id | 2d1a27022e6e4185b86adac4444c495f | | name | placement | | type | placement | +-------------+----------------------------------+ #. Create the Placement API service endpoints: .. note:: Depending on your environment, the URL for the endpoint will vary by port (possibly 8780 instead of 8778, or no port at all) and hostname. You are responsible for determining the correct URL. .. code-block:: console $ openstack endpoint create --region RegionOne \ placement public http://controller:8778 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 2b1b2637908b4137a9c2e0470487cbc0 | | interface | public | | region | RegionOne | | region_id | RegionOne | | service_id | 2d1a27022e6e4185b86adac4444c495f | | service_name | placement | | service_type | placement | | url | http://controller:8778 | +--------------+----------------------------------+ $ openstack endpoint create --region RegionOne \ placement internal http://controller:8778 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 02bcda9a150a4bd7993ff4879df971ab | | interface | internal | | region | RegionOne | | region_id | RegionOne | | service_id | 2d1a27022e6e4185b86adac4444c495f | | service_name | placement | | service_type | placement | | url | http://controller:8778 | +--------------+----------------------------------+ $ openstack endpoint create --region RegionOne \ placement admin http://controller:8778 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 3d71177b9e0f406f98cbff198d74b182 | | interface | admin | | region | RegionOne | | region_id | RegionOne | | service_id | 2d1a27022e6e4185b86adac4444c495f | | service_name | placement | | service_type | placement | | url | http://controller:8778 | +--------------+----------------------------------+ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/install/verify.rst0000664000175000017500000000514300000000000023411 0ustar00zuulzuul00000000000000=================== Verify Installation =================== Verify operation of the placement service. .. note:: You will need to authenticate to the identity service as an ``admin`` before making these calls. There are many different ways to do this, depending on how your system was set up. If you do not have an ``admin-openrc`` file, you will have something similar. #. Source the ``admin`` credentials to gain access to admin-only CLI commands: .. code-block:: console $ . admin-openrc #. Perform status checks to make sure everything is in order: .. code-block:: console $ placement-status upgrade check +----------------------------------+ | Upgrade Check Results | +----------------------------------+ | Check: Missing Root Provider IDs | | Result: Success | | Details: None | +----------------------------------+ | Check: Incomplete Consumers | | Result: Success | | Details: None | +----------------------------------+ The output of that command will vary by release. 
See :ref:`placement-status upgrade check ` for details. #. Run some commands against the placement API: * Install the `osc-placement`_ plugin: .. note:: This example uses `PyPI`_ and :ref:`about-pip` but if you are using distribution packages you can install the package from their repository. With the move to python3 you will need to specify **pip3** or install **python3-osc-placement** from your distribution. .. code-block:: console $ pip3 install osc-placement * List available resource classes and traits: .. code-block:: console $ openstack --os-placement-api-version 1.2 resource class list --sort-column name +----------------------------+ | name | +----------------------------+ | DISK_GB | | IPV4_ADDRESS | | ... | $ openstack --os-placement-api-version 1.6 trait list --sort-column name +---------------------------------------+ | name | +---------------------------------------+ | COMPUTE_DEVICE_TAGGING | | COMPUTE_NET_ATTACH_INTERFACE | | ... | .. _osc-placement: https://docs.openstack.org/osc-placement/latest/ .. _PyPI: https://pypi.org ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/placement-api-microversion-history.rst0000664000175000017500000000125100000000000027366 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .. _placement-api-microversion-history: .. include:: ../../placement/rest_api_version_history.rst ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2367778 openstack_placement-13.0.0/doc/source/specs/0000775000175000017500000000000000000000000021017 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2127779 openstack_placement-13.0.0/doc/source/specs/2023.1/0000775000175000017500000000000000000000000021544 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2367778 openstack_placement-13.0.0/doc/source/specs/2023.1/approved/0000775000175000017500000000000000000000000023364 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/specs/2023.1/approved/policy-defaults-improvement.rst0000664000175000017500000000715400000000000031574 0ustar00zuulzuul00000000000000.. This work is licensed under a Creative Commons Attribution 3.0 Unported License. 
http://creativecommons.org/licenses/by/3.0/legalcode =========================== Policy Defaults Improvement =========================== https://blueprints.launchpad.net/placement/+spec/policy-defaults-improvement This spec is to improve the placement APIs policy as the directions decided in `RBAC community-wide goal `_ Problem description =================== While discussing the new RBAC (scope_type and project admin vs system admin things) with operators in berlin ops meetup and via emails, and policy popup meetings, we got the feedback that we need to keep the legacy admin behaviour same as it is otherwise it is going to be a big breaking change for many of the operators. Same feedback for scope_type. - https://etherpad.opendev.org/p/BER-2022-OPS-SRBAC - https://etherpad.opendev.org/p/rbac-operator-feedback By considering the feedback, we decided to make all the policy to be project scoped, release project reader role, and not to change the legacy admin behaviour. Use Cases --------- Ideally most operators should be able to run without modifying policy, as such we need to have defaults closure to the usage. Proposed change =============== The `RBAC community-wide goal `_ defines all the direction and implementation usage of policy. This proposal is to implement the phase 1 and phase 2 of the `RBAC community-wide goal `_ Alternatives ------------ Keep the policy defaults same as it is and expect operators to override them to behave as per their usage. Data model impact ----------------- None REST API impact --------------- The placement APIs policy will modified to add reader roles, scoped to projects, and keep legacy behaviour same as it is. Most of the policies will be default to 'admin-or-service' role but we will review every policy rule default while doing the code change. Security impact --------------- Easier to understand policy defaults will help keep the system secure. Notifications impact -------------------- None Other end user impact --------------------- None Performance Impact ------------------ None Other deployer impact --------------------- None Developer impact ---------------- New APIs must add policies that follow the new pattern. Upgrade impact -------------- The scope_type of all the policy rules will be ``project`` if any deployement is running with enforce_scope enabled and with system scope token then they need to use the project scope token. Also, if any API policy defaults have been modified to ``service`` role only (most of the policies will be default to admin-or-service) then the deployment using such APIs need to override them in policy.yaml to continue working for them. Implementation ============== Assignee(s) ----------- Primary assignee: gmann Feature Liaison --------------- Feature liaison: dansmith Work Items ---------- * Scope all policy to project * Add project reader role in policy * Modify policy rule unit tests Dependencies ============ None Testing ======= Modify or add the policy unit tests. Documentation Impact ==================== API Reference should be kept consistent with any policy changes, in particular around the default reader role. References ========== History ======= .. list-table:: Revisions :header-rows: 1 * - Release Name - Description * - 2023.1 - Introduced ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/specs/index.rst0000664000175000017500000001013500000000000022660 0ustar00zuulzuul00000000000000.. 
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ========================= Placement Specifications ========================= Significant feature developments are tracked in documents called specifications. From the Train cycle onward, those documents are kept in this section. Prior to that, Placement specifications were a part of the `Nova Specs`_. The following specifications represent the stages of design and development of resource providers and the Placement service. Implementation details may have changed or be partially complete at this time. * `Generic Resource Pools `_ * `Compute Node Inventory `_ * `Resource Provider Allocations `_ * `Resource Provider Base Models `_ * `Nested Resource Providers`_ * `Custom Resource Classes `_ * `Scheduler Filters in DB `_ * `Scheduler claiming resources to the Placement API `_ * `The Traits API - Manage Traits with ResourceProvider `_ * `Request Traits During Scheduling`_ * `filter allocation candidates by aggregate membership`_ * `perform granular allocation candidate requests`_ * `inventory and allocation data migration`_ (reshaping provider trees) * `handle allocation updates in a safe way`_ .. _Nested Resource Providers: http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/nested-resource-providers.html .. _Request Traits During Scheduling: https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/request-traits-in-nova.html .. _filter allocation candidates by aggregate membership: https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/alloc-candidates-member-of.html .. _perform granular allocation candidate requests: http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/granular-resource-requests.html .. _inventory and allocation data migration: http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/reshape-provider-tree.html .. _handle allocation updates in a safe way: https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/add-consumer-generation.html .. _Nova Specs: http://specs.openstack.org/openstack/nova-specs Train ----- Implemented ~~~~~~~~~~~ .. toctree:: :maxdepth: 1 :glob: train/implemented/* In Progress ~~~~~~~~~~~ .. toctree:: :maxdepth: 1 :glob: train/approved/* Xena ---- Implemented ~~~~~~~~~~~ .. toctree:: :maxdepth: 1 :glob: xena/implemented/* In Progress ~~~~~~~~~~~ Yoga ---- Implemented ~~~~~~~~~~~ .. toctree:: :maxdepth: 1 :glob: yoga/implemented/* In Progress ~~~~~~~~~~~ Zed --- Implemented ~~~~~~~~~~~ In Progress ~~~~~~~~~~~ .. toctree:: :maxdepth: 1 :glob: zed/approved/* 2023.1 ------ Implemented ~~~~~~~~~~~ In Progress ~~~~~~~~~~~ .. toctree:: :maxdepth: 1 :glob: 2023.1/approved/* .. toctree:: :hidden: template.rst ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/specs/template.rst0000664000175000017500000002650700000000000023376 0ustar00zuulzuul00000000000000.. This work is licensed under a Creative Commons Attribution 3.0 Unported License. 
http://creativecommons.org/licenses/by/3.0/legalcode ======================== Example Spec - The title ======================== Include the URL of your story from StoryBoard: https://storyboard.openstack.org/#!/story/XXXXXXX Introduction paragraph -- why are we doing anything? A single paragraph of prose that operators can understand. The title and this first paragraph should be used as the subject line and body of the commit message respectively. Some notes about the spec process: * Not all blueprints need a spec, start with a story. * The aim of this document is first to define the problem we need to solve, and second agree the overall approach to solve that problem. * This is not intended to be extensive documentation for a new feature. For example, there is no need to specify the exact configuration changes, nor the exact details of any DB model changes. But you should still define that such changes are required, and be clear on how that will affect upgrades. * You should aim to get your spec approved before writing your code. While you are free to write prototypes and code before getting your spec approved, its possible that the outcome of the spec review process leads you towards a fundamentally different solution than you first envisaged. * But API changes are held to a much higher level of scrutiny. As soon as an API change merges, we must assume it could be in production somewhere, and as such, we then need to support that API change forever. To avoid getting that wrong, we do want lots of details about API changes up front. Some notes about using this template: * Your spec should be in ReSTructured text, like this template. * Please wrap text at 79 columns. * The filename in the git repository should start with the StoryBoard story number. For example: ``2005171-allocation-partitioning.rst``. * Please do not delete any of the sections in this template. If you have nothing to say for a whole section, just write: None * For help with syntax, see http://sphinx-doc.org/rest.html * To test out your formatting, build the docs using ``tox -e docs`` and see the generated HTML file in doc/build/html/specs/. The generated file will have an ``.html`` extension where the original has ``.rst``. * If you would like to provide a diagram with your spec, ascii diagrams are often the best choice. http://asciiflow.com/ is a useful tool. If ascii is insufficient, you have the option to use seqdiag_ or actdiag_. .. _seqdiag: http://blockdiag.com/en/seqdiag/index.html .. _actdiag: http://blockdiag.com/en/actdiag/index.html Problem description =================== A detailed description of the problem. What problem is this feature addressing? Use Cases --------- What use cases does this address? What impact on actors does this change have? Ensure you are clear about the actors in each use case: Developer, End User, Deployer etc. Proposed change =============== Here is where you cover the change you propose to make in detail. How do you propose to solve this problem? If this is one part of a larger effort make it clear where this piece ends. In other words, what's the scope of this effort? At this point, if you would like to get feedback on if the problem and proposed change fit in placement, you can stop here and post this for review saying: Posting to get preliminary feedback on the scope of this spec. Alternatives ------------ What other ways could we do this thing? Why aren't we using those? 
This doesn't have to be a full literature review, but it should demonstrate that thought has been put into why the proposed solution is an appropriate one. Data model impact ----------------- Changes which require modifications to the data model often have a wider impact on the system. The community often has strong opinions on how the data model should be evolved, from both a functional and performance perspective. It is therefore important to capture and gain agreement as early as possible on any proposed changes to the data model. Questions which need to be addressed by this section include: * What new data objects and/or database schema changes is this going to require? * What database migrations will accompany this change? * How will the initial set of new data objects be generated? For example if you need to take into account existing instances, or modify other existing data, describe how that will work. API impact ---------- Each API method which is either added or changed should have the following * Specification for the method * A description of what the method does suitable for use in user documentation * Method type (POST/PUT/GET/DELETE) * Normal http response code(s) * Expected error http response code(s) * A description for each possible error code should be included describing semantic errors which can cause it such as inconsistent parameters supplied to the method, or when a resource is not in an appropriate state for the request to succeed. Errors caused by syntactic problems covered by the JSON schema definition do not need to be included. * URL for the resource * URL should not include underscores; use hyphens instead. * Parameters which can be passed via the url * JSON schema definition for the request body data if allowed * Field names should use snake_case style, not camelCase or MixedCase style. * JSON schema definition for the response body data if any * Field names should use snake_case style, not camelCase or MixedCase style. * Example use case including typical API samples for both data supplied by the caller and the response * Discuss any policy changes, and discuss what things a deployer needs to think about when defining their policy. Note that the schema should be defined as restrictively as possible. Parameters which are required should be marked as such and only under exceptional circumstances should additional parameters which are not defined in the schema be permitted (eg additionalProperties should be False). Reuse of existing predefined parameter types such as regexps for passwords and user defined names is highly encouraged. Security impact --------------- Describe any potential security impact on the system. Some of the items to consider include: * Does this change touch sensitive data such as tokens, keys, or user data? * Does this change alter the API in a way that may impact security, such as a new way to access sensitive information or a new way to log in? * Does this change involve cryptography or hashing? * Does this change require the use of sudo or any elevated privileges? * Does this change involve using or parsing user-provided data? This could be directly at the API level or indirectly such as changes to a cache layer. * Can this change enable a resource exhaustion attack, such as allowing a single API interaction to consume significant server resources? Some examples of this include launching subprocesses for each connection, or entity expansion attacks in XML. 
For more detailed guidance, please see the OpenStack Security Guidelines as a reference (https://wiki.openstack.org/wiki/Security/Guidelines). These guidelines are a work in progress and are designed to help you identify security best practices. For further information, feel free to reach out to the OpenStack Security Group at openstack-security@lists.openstack.org. Other end user impact --------------------- Aside from the API, are there other ways a user will interact with this feature? * Does this change have an impact on osc-placement? What does the user interface there look like? Performance Impact ------------------ Describe any potential performance impact on the system, for example how often will new code be called, and is there a major change to the calling pattern of existing code. Examples of things to consider here include: * A small change in a utility function or a commonly used decorator can have a large impacts on performance. * Calls which result in a database queries can have a profound impact on performance when called in critical sections of the code. * Will the change include any locking, and if so what considerations are there on holding the lock? Other deployer impact --------------------- Discuss things that will affect how you deploy and configure OpenStack that have not already been mentioned, such as: * What config options are being added? Should they be more generic than proposed? Are the default values ones which will work well in real deployments? * Is this a change that takes immediate effect after its merged, or is it something that has to be explicitly enabled? * If this change is a new binary, how would it be deployed? * Please state anything that those doing continuous deployment, or those upgrading from the previous release, need to be aware of. Also describe any plans to deprecate configuration values or features. Developer impact ---------------- Discuss things that will affect other developers working on OpenStack. Upgrade impact -------------- Describe any potential upgrade impact on the system. Implementation ============== Assignee(s) ----------- Who is leading the writing of the code? Or is this a blueprint where you're throwing it out there to see who picks it up? If more than one person is working on the implementation, please designate the primary author and contact. Primary assignee: Other contributors: Work Items ---------- Work items or tasks -- break the feature up into the things that need to be done to implement it. Those parts might end up being done by different people, but we're mostly trying to understand the timeline for implementation. Dependencies ============ * Include specific references to other specs or stories that this one either depends on or is related to. * If this requires new functionality in another project that is not yet used document that fact. * Does this feature require any new library dependencies or code otherwise not included in OpenStack? Or does it depend on a specific version of a library? Testing ======= Please discuss the important scenarios that need to be tested, as well as specific edge cases we should be ensuring work correctly. Documentation Impact ==================== Which audiences are affected most by this change, and which documentation titles on docs.openstack.org should be updated because of this change? Don't repeat details discussed above, but reference them here in the context of documentation for multiple audiences. References ========== Please add any useful references here. 
You are not required to have any references. Moreover, this specification should still make sense when your references are unavailable. Examples of what you could include are: * Links to mailing list or IRC discussions * Links to notes from a summit session * Links to relevant research, if appropriate * Anything else you feel it is worthwhile to refer to History ======= Optional section intended to be used each time the spec is updated to describe new design, API or any database schema updated. Useful to let the reader understand how the spec has changed over time. .. list-table:: Revisions :header-rows: 1 * - Release Name - Description * - - Introduced ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2127779 openstack_placement-13.0.0/doc/source/specs/train/0000775000175000017500000000000000000000000022134 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2367778 openstack_placement-13.0.0/doc/source/specs/train/approved/0000775000175000017500000000000000000000000023754 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/specs/train/approved/2005473-support-consumer-types.rst0000664000175000017500000002534100000000000032044 0ustar00zuulzuul00000000000000.. This work is licensed under a Creative Commons Attribution 3.0 Unported License. http://creativecommons.org/licenses/by/3.0/legalcode ======================== Support Consumer Types ======================== Include the URL of your story from StoryBoard: https://storyboard.openstack.org/#!/story/2005473 This spec aims at providing support for services to model ``consumer types`` in placement. While placement defines a consumer to be an entity consuming resources from a provider it does not provide a way to identify similar "types" of consumers and henceforth allow services to group/query them based on their types. This spec proposes to associate each consumer to a particular type defined by the service owning the consumer. Problem description =================== In today's placement world each allocation posted by a service is against a provider for a consumer (ex: for an instance or a migration). However a service may want to distinguish amongst the allocations made against its various types of consumers (ex: nova may want to fetch allocations against instances alone). This is currently not possible in placement and hence the goal is to make placement aware of "types of consumers" for the services. Use Cases --------- * Nova using placement as its `quota calculation system`_: Currently this approach uses the nova_api database to calculate the quota on the "number of instances". In order for nova to be able to use placement to count the number of "instance-consumers", there needs to be a way by which we can differentiate "instance-consumers" from "migration-consumers". * Ironic wanting to differentiate between "standalone-consumer" versus "nova-consumer". Note that it is not within the scope of placement to model the coordination of the consumer type collisions that may arise between multiple services during their definition. Placement will also not be able to identify or verify correct consumer types (eg, INTANCE versus INSTANCE) from the external service's perspective. 
Proposed change =============== In order to model consumer types in placement, we will add a new ``consumer_types`` table to the placement database which will have two columns: #. an ``id`` which will be of type integer. #. a ``name`` which will be of type varchar (maximum of 255 characters) and this will have a unique constraint on it. The pattern restrictions for the name will be similar to placement traits and resource class names, i.e restricted to only ``^[A-Z0-9_]+$`` with length restrictions being {1, 255}. A sample look of such a table would be: +--------+----------+ | id | name | +========+==========+ | 1 | UNKNOWN | +--------+----------+ | 2 | INSTANCE | +--------+----------+ | 3 | MIGRATION| +--------+----------+ A new column called ``consumer_type_id`` would be added to the ``consumers`` table to map the consumer to its type. The ``POST /allocations`` and ``PUT /allocations/{consumer_uuid}`` REST API's will gain a new (required) key called ``consumer_type`` which is of type string in their request body's through which the caller can specify what type of consumer it is creating or updating the allocations for. If the specified ``consumer_type`` key is not present in the ``consumer_types`` table, a new entry will be created. Also note that once a consumer type is created, it lives on forever. If this becomes a problem in the future for the operators a tool can be provided to clean them up. In order to maintain parity between the request format of ``PUT /allocations/{consumer_uuid}`` and response format of ``GET /allocations/{consumer_uuid}``, the ``consumer_type`` key will also be exposed through the response of ``GET /allocations/{consumer_uuid}`` request. The external services will be able to leverage this ``consumer_type`` key through the ``GET /usages`` REST API which will have a change in the format of its request and response. The request will gain a new optional key called ``consumer_type`` which will enable users to query usages based on the consumer type. The response will group the resource usages by the specified consumer_type (if consumer_type key is not specified it will return the usages for all the consumer_types) meaning it will gain a new ``consumer_type`` key. Per consumer type we will also return a ``consumer_count`` of consumers of that type. See the `API impact`_ section for more details on how this would be done. The above REST API changes and the corresponding changes to the ``/reshaper`` REST API will be available from a new microversion. The existing consumers in placement would be mapped to a default consumer type called ``UNKNOWN`` (which will be the default value while creating the model schema) which means we do not know what type these consumers are and the service to which the consumers belong to needs to update this information if it wants to avail the ``consumer_types`` feature. Alternatives ------------ We could create a new REST API to allow users to create consumer types explicitly but it does not make sense to add a new API for a non-user facing feature. Data model impact ----------------- The placement database will get a new ``consumer_types`` table that will have a default consumer type called ``UNKNOWN`` and the ``consumers`` table will get a new ``consumer_type_id`` column that by default will point to the ``UNKNOWN`` consumer type. The migration is intended to solely be an alembic migration although a comparision can be done for this versus having a separate online data migration to update null values to "UNKNOWN" to pick the faster one. 
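To make the shape of the change concrete, an illustrative DDL sketch follows.
This is not the actual migration (which will be written as an alembic
migration as noted above), and the exact column types and constraint names
here are assumptions::

    CREATE TABLE consumer_types (
        id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(255) NOT NULL,
        UNIQUE KEY uniq_consumer_types0name (name)
    );

    -- Existing consumers are mapped to the default UNKNOWN type.
    ALTER TABLE consumers
        ADD COLUMN consumer_type_id INT UNSIGNED NULL,
        ADD CONSTRAINT consumers_consumer_type_id_fkey
            FOREIGN KEY (consumer_type_id) REFERENCES consumer_types (id);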
API impact ---------- The new ``POST /allocations`` request will look like this:: { "30328d13-e299-4a93-a102-61e4ccabe474": { "consumer_generation": 1, "project_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "user_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "consumer_type": "INSTANCE", # This is new "allocations": { "e10927c4-8bc9-465d-ac60-d2f79f7e4a00": { "resources": { "VCPU": 2, "MEMORY_MB": 3 }, "generation": 4 } } }, "71921e4e-1629-4c5b-bf8d-338d915d2ef3": { "consumer_generation": 1, "project_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "user_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "consumer_type": "MIGRATION", # This is new "allocations": {} }, "48c1d40f-45d8-4947-8d46-52b4e1326df8": { "consumer_generation": 1, "project_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "user_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "consumer_type": "UNKNOWN", # This is new "allocations": { "e10927c4-8bc9-465d-ac60-d2f79f7e4a00": { "resources": { "VCPU": 4, "MEMORY_MB": 5 }, "generation": 12 } } } } The new ``PUT /allocations/{consumer_uuid}`` request will look like this:: { "allocations": { "4e061c03-611e-4caa-bf26-999dcff4284e": { "resources": { "DISK_GB": 20 } }, "89873422-1373-46e5-b467-f0c5e6acf08f": { "resources": { "MEMORY_MB": 1024, "VCPU": 1 } } }, "consumer_generation": 1, "user_id": "66cb2f29-c86d-47c3-8af5-69ae7b778c70", "project_id": "42a32c07-3eeb-4401-9373-68a8cdca6784", "consumer_type": "INSTANCE" # This is new } Note that ``consumer_type`` is a required key for both these requests at this microversion. The new ``GET /usages`` response will look like this for a request of type ``GET /usages?project_id=&user_id=`` or ``GET /usages?project_id=`` where the consumer_type key is not specified:: { "usages": { "INSTANCE": { "consumer_count": 5, "DISK_GB": 5, "MEMORY_MB": 512, "VCPU": 2 } "MIGRATION": { "consumer_count": 2, "DISK_GB": 5, "MEMORY_MB": 512, "VCPU": 2 } "UNKNOWN": { "consumer_count": 1, "DISK_GB": 5, "MEMORY_MB": 512, "VCPU": 2 } } } The new ``GET /usages`` response will look like this for a request of type ``GET /usages?project_id=&user_id=&consumer_type="INSTANCE"`` or ``GET /usages?project_id=&consumer_type="INSTANCE"`` where the consumer_type key is specified:: { "usages": { "INSTANCE": { "consumer_count": 5, "DISK_GB": 5, "MEMORY_MB": 512, "VCPU": 2 } } } A special request of the form ``GET /usages?project_id=&consumer_type=all`` will be allowed to enabled users to be able to query for the total count of all the consumers. The response for such a request will look like this:: { "usages": { "all": { "consumer_count": 3, "DISK_GB": 5, "MEMORY_MB": 512, "VCPU": 2 } } } Note that ``consumer_type`` is an optional key for the ``GET /usages`` request. The above REST API changes and the corresponding changes to the ``/reshaper`` REST API will be available from a new microversion. Security impact --------------- None. Other end user impact --------------------- The external services using this feature like nova should take the responsibility of updating the consumer type of existing consumers from "UNKNOWN" to the actual type through the ``PUT /allocations/{consumer_uuid}`` REST API. Performance Impact ------------------ None. Other deployer impact --------------------- None. Developer impact ---------------- None. Upgrade impact -------------- The ``placement-manage db sync`` command has to be run by the operators in order to upgrade the database schema to accommodate the new changes. 
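For example, operators can use the same invocation shown in the installation
guides::

    # su -s /bin/sh -c "placement-manage db sync" placement

Deployments that set
:oslo.config:option:`placement_database.sync_on_startup` to ``True`` will
instead apply the migration when the placement web service is restarted.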
Implementation ============== Assignee(s) ----------- Primary assignee: Other contributors: Work Items ---------- * Add the new ``consumer_types`` table and create a new ``consumer_type_id`` column in the ``consumers`` table with a foreign key constraint to the ``id`` column of the ``consumer_types`` table. * Make the REST API changes in a new microversion for: * ``POST /allocations``, * ``PUT /allocations/{consumer_uuid}``, * ``GET /allocations/{consumer_uuid}``, * ``GET /usages`` and * ``/reshaper`` Dependencies ============ None. Testing ======= Unit and functional tests to validate the feature will be added. Documentation Impact ==================== The placement API reference will be updated to reflect the new changes. References ========== .. _quota calculation system: https://review.opendev.org/#/q/topic:bp/count-quota-usage-from-placement History ======= .. list-table:: Revisions :header-rows: 1 * - Release Name - Description * - Train - Introduced ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2367778 openstack_placement-13.0.0/doc/source/specs/train/implemented/0000775000175000017500000000000000000000000024437 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=openstack_placement-13.0.0/doc/source/specs/train/implemented/2005297-negative-aggregate-membership.rst 22 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/specs/train/implemented/2005297-negative-aggregate-membership.0000664000175000017500000003272400000000000033155 0ustar00zuulzuul00000000000000.. This work is licensed under a Creative Commons Attribution 3.0 Unported License. http://creativecommons.org/licenses/by/3.0/legalcode =================================================== Support filtering by forbidden aggregate membership =================================================== https://storyboard.openstack.org/#!/story/2005297 This blueprint proposes to support for negative filtering by the underlying resource provider's aggregate membership. Problem description =================== Placement currently supports ``member_of`` query parameters for the ``GET /resource_providers`` and ``GET /allocation_candidates`` endpoints. This parameter is either "a string representing an aggregate uuid" or "the prefix ``in:`` followed by a comma-separated list of strings representing aggregate uuids". For example:: &member_of=in:,&member_of= would translate logically to: "Candidate resource providers should be in either agg1 or agg2, but definitely in agg3." (See `alloc-candidates-member-of`_ spec for details) However, there is no expression for forbidden aggregates in the API. In other words, we have no way to say "don't use resource providers in this special aggregate for non-special workloads". Use Cases --------- This feature is useful to save special resources for specific users. Use Case 1 ~~~~~~~~~~ Some of the compute host are *Licensed Windows Compute Host*, meaning any VMs booted on this compute host will be considered as licensed Windows image and depending on the usage of VM, operator will charge it to the end-users. As an operator, I want to avoid booting images/volumes other than Windows OS on *Licensed Windows Compute Host*. Use Case 2 ~~~~~~~~~~ Reservation projects like blazar would like to have its own aggregate for host reservation in order to have consumers without any reservations to be scheduled outside of that aggregate in order to save the reserved resources. 
Proposed change =============== Adjust the handling of the ``member_of`` parameter so that aggregates can be expressed as forbidden. Forbidden aggregates are prefixed with a ``!``. In the following example:: &member_of=! would translate logically to: "Candidate resource providers should *not* be in agg1" This negative expression can also be used in multiple ``member_of`` parameters:: &member_of=in:,&member_of=&member_of=! would translate logically to: "Candidate resource providers must be at least one of agg1 or agg2, definitely in agg3 and definitely *not* in agg4." Note that we don't support ``!`` for arguments to the ``in:`` prefix:: &member_of=in:,,! This would result in HTTP 400 Bad Request error. Instead, we support ``!in:`` prefix:: &member_of=!in:,, which is equivalent to:: member_of=!&member_of=!&member_of=! Nested resource providers ------------------------- For nested resource providers, an aggregate on a root provider automatically spans the whole tree. When a root provider is in forbidden aggregates, the child providers can't be a candidate even if the child provider belongs to no (or another different) aggregate. In the following environments, for example, .. code:: +-----------------------+ | sharing storage (ss1) | | agg: [aggB] | +-----------+-----------+ | aggB +------------------------------+ +--------------|--------------+ | +--------------------------+ | | +------------+------------+ | | | compute node (cn1) | | | |compute node (cn2) | | | | agg: [aggA] | | | | agg: [aggB] | | | +-----+-------------+------+ | | +----+-------------+------+ | | | parent | parent | | | parent | parent | | +-----+------+ +----+------+ | | +----+------+ +----+------+ | | | numa1_1 | | numa1_2 | | | | numa2_1 | | numa2_2 | | | | agg:[aggC]| | agg:[] | | | | agg:[] | | agg:[] | | | +-----+------+ +-----------+ | | +-----------+ +-----------+ | +-------|----------------------+ +-----------------------------+ | aggC +-----+-----------------+ | sharing storage (ss2) | | agg: [aggC] | +-----------------------+ the exclusion constraint is as follows: * ``member_of=!`` excludes "cn1", "numa1_1" and "numa1_2". * ``member_of=!`` excludes "cn2", "numa2_1", "numa2_2", and "ss1". * ``member_of=!`` excludes "numa1_1" and "ss2". Note that this spanning doesn't happen on numbered ``member_of`` parameters, which is used for the granular request: * ``member_of=!`` excludes "cn1" * ``member_of=!`` excludes "cn2" and "ss1" * ``member_of=!`` excludes "numa1_1" and "ss2". See `granular-resource-request`_ spec for details. Alternatives ------------ We can use forbidden traits to exclude specific resource providers, but if we use traits, then we should put Blazar or windows license trait not only on root providers but also on every resource providers in the tree, so we don't take this way. We can also create nova scheduler filters to do post-processing of compute hosts by looking at host aggregate relationships just as `BlazarFilter`_ does today. However, this is inefficient and we don't want to develop/use another filter for the windows license use case. Data model impact ----------------- None. REST API impact --------------- A new microversion will be created which will update the validation for the ``member_of`` parameter on ``GET /allocation_candidates`` and ``GET /resource_providers`` to accept ``!`` both as a prefix on aggregate uuids and as a prefix to the ``in:`` prefix to express that the prefixed aggregate (or the aggregates) is to be excluded from the results. 
We do not return 400 if an agg UUID is found on both the positive and negative sides of the request. For example:: &member_of=in:,&member_of=! The first member_of would return all resource_providers in either agg1 or agg2, while the second member_of would eliminate those in agg2. The result will be a 200 containing just those resource_providers in agg1. Likewise, we do not return 400 for cases like:: &member_of=&member_of=! As in the previous example, we return 200 with empty results, since this is a syntactically valid request, even though a resource provider cannot be both inside and outside of agg1 at the same time. Security impact --------------- None. Notifications impact -------------------- None. Other end user impact --------------------- None. Performance Impact ------------------ Queries to the database will see a moderate increase in complexity but existing table indexes should handle this with aplomb. Other deployer impact --------------------- None. Developer impact ---------------- This helps us to develop a simple reservation mechanism without having a specific nova filter, for example, via the following flow: 0. Operator who wants to enable blazar sets default forbidden and required membership key in the ``nova.conf``. * The parameter key in the configuration file is something like ``[scheduler]/placement_req_default_forbidden_member_prefix`` and the value is set by the operator to ``reservation:``. * The parameter key in the configuration file is something like ``[scheduler]/placement_req_required_member_prefix`` and the value would is set by the operator to ``reservation:``. 1. Operator starts up the service and makes a host-pool for reservation via blazar API * Blazar makes an nova aggregate with ``reservation:`` metadata on initialization as a blazar's free pool * Blazar puts hosts specified by the operator into the free pool aggregate on demand 2. User uses blazar to make a host reservation and to get the reservation id * Blazar picks up a host from the blazar's free pool * Blazar creates a new nova aggregate for that reservation and set that aggregate's metadata key to ``reservation:`` and puts the reserved host into that aggregate 3. User creates a VM with a flavor/image with ``reservation:`` meta_data/extra_specs to consume the reservation * Nova finds in the flavor that the extra_spec has a key which starts with what is set in ``[scheduler]/placement_req_required_member_prefix``, and looks up the table for aggregates which has the specified metadata:: required_prefix = CONF.scheduler.placement_req_required_member_prefix # required_prefix = 'reservation:' required_meta_data = get_flavor_extra_spec_starts_with(required_prefix) # required_meta_data = 'reservation:' required_aggs = aggs_whose_metadata_is(required_meta_data) # required_aggs = [] * Nova finds out that the default forbidden aggregate metadata prefix, which is set in ``[scheduler]/placement_req_default_forbidden_member_prefix``, is explicitly via the flavor, so skip:: default_forbidden_prefix = CONF.scheduler.placement_req_default_forbidden_member_prefix # default_forbidden_prefix = ['reservation:'] forbidden_aggs = set() if not get_flavor_extra_spec_starts_with(default_forbidden_prefix): # this is skipped because 'reservation:' is in the flavor in this case forbidden_aggs = aggs_whose_metadata_starts_with(default_forbidden_prefix) * Nova calls placement with required and forbidden aggregates:: # We don't have forbidden aggregates in this case ?member_of= 4. 
User creates a VM with a flavor/image with no reservation, that is, without ``reservation:`` meta_data/extra_specs. * Nova finds in the flavor that the extra_spec has no key which starts with what is set in ``[scheduler]/placement_req_required_member_prefix``, so no required aggregate is obtained:: required_prefix = CONF.scheduler.placement_req_required_member_prefix # required_prefix = 'reservation:' required_meta_data = get_flavor_extra_spec_starts_with(required_prefix) # required_meta_data = '' required_aggs = aggs_whose_metadata_is(required_meta_data) # required_aggs = set() * Nova looks up the table for default forbidden aggregates whose metadata starts with what is set in ``[scheduler]/placement_req_default_forbidden_member_prefix``:: default_forbidden_prefix = CONF.scheduler.placement_req_default_forbidden_member_prefix # default_forbidden_prefix = ['reservation:'] forbidden_aggs = set() if not get_flavor_extra_spec_starts_with(default_forbidden_prefix): # This is not skipped now forbidden_aggs = aggs_whose_metadata_starts_with(default_forbidden_prefix) # forbidden_aggs = * Nova calls placement with required and forbidden aggregates:: # We don't have required aggregates in this case ?member_of=!in: Note that the change in the nova configuration file and change in the request filter is an example and out of the scope of this spec. An alternative for this is to let placement be aware of the default forbidden traits/aggregates (See the `Bi-directional enforcement of traits`_ spec). But we agreed that it is not placement but nova which is responsible for what traits/aggregate is forbidden/required for the instance. Upgrade impact -------------- None. Implementation ============== Assignee(s) ----------- Primary assignee: Tetsuro Nakamura (nakamura.tetsuro@lab.ntt.co.jp) Work Items ---------- * Update the ``ResourceProviderList.get_all_by_filters`` and ``AllocationCandidates.get_by_requests`` methods to change the database queries to filter on "not this aggregate". * Update the placement API handlers for ``GET /resource_providers`` and ``GET /allocation_candidates`` in a new microversion to pass the negative aggregates to the methods changed in the steps above, including input validation adjustments. * Add functional tests of the modified database queries. * Add gabbi tests that express the new queries, both successful queries and those that should cause a 400 response. * Release note for the API change. * Update the microversion documents to indicate the new version. * Update placement-api-ref to show the new query handling. Dependencies ============ None. Testing ======= Normal functional and unit testing. Documentation Impact ==================== Document the REST API microversion in the appropriate reference docs. References ========== * `alloc-candidates-member-of`_ feature * `granular-resource-request`_ feature .. _`alloc-candidates-member-of`: https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/alloc-candidates-member-of.html .. _`granular-resource-request`: https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/granular-resource-requests.html .. _`BlazarFilter`: https://github.com/openstack/blazar-nova/tree/stable/rocky/blazarnova/scheduler/filters .. _`Bi-directional enforcement of traits`: https://review.opendev.org/#/c/593475/ History ======= .. 
list-table:: Revisions :header-rows: 1 * - Release Name - Description * - Stein - Approved but not implemented * - Train - Reproposed ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/specs/train/implemented/2005575-nested-magic-1.rst0000664000175000017500000006227500000000000030530 0ustar00zuulzuul00000000000000.. This work is licensed under a Creative Commons Attribution 3.0 Unported License. http://creativecommons.org/licenses/by/3.0/legalcode =================================== Getting On The Nested Magic Train 1 =================================== https://storyboard.openstack.org/#!/story/2005575 This spec describes a cluster of Placement API work to support several interrelated use cases for Train around: * Modeling complex trees such as NUMA layouts, multiple devices, networks. * Requesting affinity [#]_ between/among the various providers/allocations in allocation candidates against such layouts. * Describing granular groups more richly to facilitate the above. * Requesting candidates based on traits that are not necessarily associated with resources. An additional spec, for a feature known as `can_split`_ has been separated out to its own spec to ensure that any delay in it does not impact these features, which are less controversial. .. [#] The kind of affinity we're talking about is best understood by referring to the use case for the `same_subtree`_ feature below. Principles ========== In developing this design, some fundamental concepts have come to light. These are not really changes from the existing architecture, but understanding them becomes more important in light of the changes introduced herein. Resource versus Provider Traits ------------------------------- The database model associates traits with resource providers, not with inventories of resource classes. However, conceptually there are two different categories of traits to consider. .. _`resource traits`: **Resource Traits** are tied to specific resources. For example, ``HW_CPU_X86_AVX2`` describes a characteristic of ``VCPU`` (or ``PCPU``) resources. .. _`provider traits`: **Provider Traits** are characteristics of a provider, regardless of the resources it provides. For example, ``COMPUTE_VOLUME_MULTI_ATTACH`` is a capability of a compute host, not of any specific resource inventory. ``HW_NUMA_ROOT`` describes NUMA affinity among *all* the resources in the inventories of that provider *and* all its descendants. ``CUSTOM_PHYSNET_PUBLIC`` indicates connectivity to the ``public`` network, regardless of whether the associated resources are ``VF``, ``PF``, ``VNIC``, etc.; and regardless of whether those resources reside on the provider marked with the trait or on its descendants. This distinction becomes important when deciding how to model. **Resource traits** need to "follow" their resource class. For example, ``HW_CPU_X86_AVX2`` should be on the provider of ``VCPU`` (or ``PCPU``) resource, whether that's the root or a NUMA child. On the other hand, **provider traits** must stick to their provider, regardless of where resources inventories are placed. For example, ``COMPUTE_VOLUME_MULTI_ATTACH`` should always be on the root provider, as the root provider conceptually represents "the compute host". .. _`Traits Flow Down`: **Alternative: "Traits Flow Down":** There have_ been_ discussions_ around a provider implicitly inheriting the traits of its parent (and therefore all its ancestors). 
This would (mostly) allow us not to think about the distinction between "resource" and "provider" traits. We ultimately decided against this by a hair, mainly because of this: It makes no sense to say my PGPU is capable of MULTI_ATTACH In addition, IIUC, there are SmartNICs [1] that have CPUs on cards. If someone will want to report/model those CPUs in placement, they will be scared that CPU traits on compute side flow down to those CPUs on NIC despite they are totally different CPUs. [1] https://www.netronome.com/products/smartnic/overview/ ...and because we were able to come up with other satisfactory solutions to our use cases. .. _have: http://lists.openstack.org/pipermail/openstack-discuss/2019-April/005201.html .. _been: http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004817.html .. _discussions: https://review.opendev.org/#/c/662191/3/doc/source/specs/train/approved/2005575-nested-magic-1.rst@266 Group-Specific versus Request-Wide Query Parameters --------------------------------------------------- `granular resource requests`_ introduced a divide between ``GET /allocation_candidates`` query parameters which apply to a particular request group * resources[$S] * required[$S] * member_of[$S] * in_tree[$S] .. _`request-wide`: ...and those which apply to the request as a whole * limit * group_policy This has been fairly obvious thus far; but this spec introduces concepts (such as `root_required`_ and `same_subtree`_) that make it important to keep this distinction in mind. Moving forward, we should consider whether new features and syntax additions make more sense to be group-specific or request-wide. .. _`granular resource requests`: http://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/granular-resource-requests.html Proposed change =============== All changes are to the ``GET /allocation_candidates`` operation via new microversions, one per feature described below. arbitrary group suffixes ------------------------ **Use case:** Client code managing request groups for different kinds of resources - which will often come from different providers - may reside in different places in the codebase. For example, the management of compute resources vs. networks vs. accelerators. However, there still needs to be a way for the consuming code to express relationships (such as affinity) among these request groups. For this purpose, API consumers wish to be able to use conventions for request group identifiers. It would also be nice for development and debugging purposes if these designations had some element of human readability. (Merged) code is here: https://review.opendev.org/#/c/657419/ Granular groups are currently restricted to using integer suffixes. We will change this so they can be case-sensitive strings up to 64 characters long comprising alphanumeric (either case), underscore, and hyphen. * 64c so we can fit a stringified UUID (with hyphens) as well as some kind of handy type designation. Like ``resources_PORT_$UUID``. https://review.opendev.org/#/c/657419/4/placement/schemas/allocation_candidate.py@19 * We want to allow uppercase so consumers can make nice visual distinctions like ``resources_PORT...``; we want to allow lowercase because openstack consumers tend to use lowercase UUIDs and this makes them not have to convert them. Placement will use the string in the form it is given and transform it neither on input nor output. If the form does not match constraints a ``400`` response will be returned. 
https://review.opendev.org/#/c/657419/4/placement/schemas/allocation_candidate.py@19 * **Alternative** Uppercase only so we don't have to worry about case sensitivity or confusing differentiation from the prefixes (which are lowercase). **Rejected** because we prefer allowing lowercase UUIDs, and are willing to give the consumer the rope. https://review.opendev.org/#/c/657419/1/placement/lib.py@31 * Hyphens so we can use UUIDs without too much scrubbing. For purposes of documentation (and this spec), we'll rename the "unnumbered" group to "unspecified" or "unsuffixed", and anywhere we reference "numbered" groups we can call them "suffixed" or "granular" (I think this label is already used in some places). same_subtree ------------ **Use case:** I want to express affinity between/among allocations in separate request groups. For example, that a ``VGPU`` come from a GPU affined to the NUMA node that provides my ``VCPU`` and ``MEMORY_MB``; or that multiple network ``VF``\ s come from the same NIC. A new ``same_subtree`` query parameter will be accepted. The value is a comma-separated list of request group suffix strings ``$S``. Each must exactly match a suffix on a granular group somewhere else in the request. Importantly, the identified request groups need not have a ``resources$S`` (see `resourceless request groups`_). We define "same subtree" as "all of the resource providers satisfying the request group must be rooted at one of the resource providers satisfying the request group". Or put another way: "one of the resource providers satisfying the request group must be the direct ancestor of all the other resource providers satisfying the request group". For example, given a model like:: +--------------+ | compute node | +-------+------+ | +---------+----------+ | | +---------+--------+ +---------+--------+ | numa0 | | numa1 | | VCPU: 4 (2 used) | | VCPU: 4 | | MEMORY_MB: 2048 | | MEMORY_MB: 2048 | +---+--------------+ +---+----------+---+ | | | +---+----+ +---+---+ +---+---+ |fpga0_0 | |fpga1_0| |fpga1_1| |FPGA:1 | |FPGA:1 | |FPGA:1 | +--------+ +-------+ +-------+ to request "two VCPUs, 512MB of memory, and one FPGA from the same NUMA node," my request could include:: ?resources_COMPUTE=VCPU:2,MEMORY_MB:512 &resources_ACCEL=FPGA:1 # NOTE: The suffixes include the leading underscore! &same_subtree=_COMPUTE,_ACCEL This will produce candidates including:: - numa0: {VCPU:2, MEMORY_MB:512}, fpga0_0: {FPGA:1} - numa1: {VCPU:2, MEMORY_MB:512}, fpga1_0: {FPGA:1} - numa1: {VCPU:2, MEMORY_MB:512}, fpga1_1: {FPGA:1} but *not*:: - numa0: {VCPU:2, MEMORY_MB:512}, fpga1_0: {FPGA:1} - numa0: {VCPU:2, MEMORY_MB:512}, fpga1_1: {FPGA:1} - numa1: {VCPU:2, MEMORY_MB:512}, fpga0_0: {FPGA:1} The ``same_subtree`` query parameter is `request-wide`_, but may be repeated. Each grouping is treated independently. Anti-affinity ~~~~~~~~~~~~~ There were discussions about supporting ``!`` syntax in ``same_subtree`` to express anti-affinity (e.g. ``same_subtree=$X,!$Y`` meaning "resources from group ``$Y`` shall *not* come from the same subtree as resources from group ``$X``"). This shall be deferred to a future release. resourceless request groups --------------------------- **Use case:** When making use of `same_subtree`_, I want to be able to identify a provider as a placeholder in the subtree structure even if I don't need any resources from that provider. It is currently a requirement that a ``resources$S`` exist for all ``$S`` in a request. This restriction shall be removed such that a request group may exist e.g. 
with only ``required$S`` or ``member_of$S``. There must be at least one ``resources`` or ``resources$S`` somewhere in the request, otherwise there will be no inventory to allocate and thus no allocation candidates. If neither is present a ``400`` response will be returned. Furthermore, resourceless request groups must be used with `same_subtree`_. That is, the suffix for each resourceless request group must feature in a ``same_subtree`` somewhere in the request. Otherwise a ``400`` response will be returned. (The reasoning for this restriction_ is explained below.) For example, given a model like:: +--------------+ | compute node | +-------+------+ | +-----------+-----------+ | | +-----+-----+ +-----+-----+ |nic1 | |nic2 | |HW_NIC_ROOT| |HW_NIC_ROOT| +-----+-----+ +-----+-----+ | | +----+----+ +-----+---+ | | | | +--+--+ +--+--+ +--+--+ +--+--+ |pf1_1| |pf1_2| |pf2_1| |pf2_2| |NET1 | |NET2 | |NET1 | |NET2 | |VF:4 | |VF:4 | |VF:2 | |VF:2 | +-----+ +-----+ +-----+ +-----+ a request such as the following, meaning, "Two VFs from the same NIC, one on each of network NET1 and NET2," is legal:: ?resources_VIF_NET1=VF:1 &required_VIF_NET1=NET1 &resources_VIF_NET2=VF:1 &required_VIF_NET2=NET2 # NOTE: there is no resources_NIC_AFFINITY &required_NIC_AFFINITY=HW_NIC_ROOT &same_subtree=_VIF_NET1,_VIF_NET2,_NIC_AFFINITY The returned candidates will include:: - pf1_1: {VF:1}, pf1_2: {VF:1} - pf2_1: {VF:1}, pf2_2: {VF:1} but *not*:: - pf1_1: {VF:1}, pf2_2: {VF:1} - pf2_1: {VF:1}, pf1_2: {VF:1} .. _restriction: Why enforce resourceless + same_subtree? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Taken by itself (without `same_subtree`_), a resourceless request group intuitively means, "There must exist in the solution space a resource provider that satisfies these constraints." But what does "solution space" mean? Clearly it's not the same as `solution path`_, or we wouldn't be able to use it to add resourceless providers to that solution path. So it must encompass at least the entire non-sharing tree around the solution path. Does it also encompass sharing providers associated via aggregate? What would that mean? Since we have not identified any real use cases for resourceless *without* `same_subtree`_ (other than `root_member_of`_ -- see below) making this an error allows us to not have to deal with these questions. root_required ------------- **Use case:** I want to limit allocation candidates to trees `whose root provider`_ has (or does not have) certain traits. For example, I want to limit candidates to only multi-attach-capable hosts; or preserve my Windows-licensed hosts for special use. A new ``root_required`` query parameter will be accepted. The value syntax is identical to that of ``required[$S]``: that is, it accepts a comma-delimited list of trait names, each optionally prefixed with ``!`` to indicate "forbidden" rather than "required". This is a `request-wide`_ query parameter designed for `provider traits`_ specifically on the root provider of the non-sharing tree involved in the allocation candidate. That is, regardless of any group-specific constraints, and regardless of whether the root actually provides resource to the request, results will be filtered such that the root of the non-sharing tree conforms to the constraints specified in ``root_required``. ``root_required`` may not be repeated. .. _`whose root provider`: The fact that this feature is (somewhat awkwardly) restricted to "...trees whose root provider ..." deserves some explanation. 
This is to fill a gap in use cases that cannot be adequately covered by other query parameters. * To land on a tree (host) with a given trait *anywhere* in its hierarchy, `resourceless request groups`_ without `same_subtree`_ could be used. However, there is no way to express the "forbidden" side of this in a way that makes sense: * A resourceless ``required$S=!FOO`` would simply ensure that a provider *anywhere in the tree* does not have ``FOO`` - which would end up not being restrictive as intended in most cases. * We could define "resourceless forbidden" to mean "nowhere in the tree", but this would be inconsistent and hard to explain. * To ensure that the desired trait is present (or absent) in the *result set*, it would be necessary to attach the trait to a group whose resource constraints will be satisfied by the provider possessing (or lacking) that trait. * This requires the API consumer to understand too much about how the provider trees are modeled; and * It doesn't work in heterogeneous environments where such `provider traits`_ may or may not stick with providers of a specific resource class. This could possibly be mitigated by careful use of `same_subtree`_, but that again requires deep understanding of the tree model, and also confuses the meaning of `same_subtree`_ and `resource versus provider traits`_. * The `traits flow down`_ concept described earlier could help here; but that would still entail attaching `provider traits`_ to a particular request group. Which one? Because the trait isn't associated with a specific resource, it would be arbitrary and thus difficult to explain and justify. .. _`solution path`: **Alternative: "Solution Path":** A more general solution was discussed whereby we would define a "solution path" as: **The set of resource providers which satisfy all the request groups *plus* all the ancestors of those providers, up to the root.** This would allow us to introduce a `request-wide`_ query parameter such as ``solution_path_required``. The idea would be the same as ``root_required``, but the specified trait constraints would be applied to all providers in the "solution path" (required traits must be present *somewhere* in the solution path; forbidden traits must not be present *anywhere* in the solution path). This alternative was rejected because: * Describing the "solution path" concept to API consumers would be hard. * We decided the only real use cases where the trait constraints needed to be applied to providers *other than the root* could be satisfied (and more naturally) in other ways. This section was the result of long discussions `in IRC`_ and on `the review for this spec`_ .. _`in IRC`: http://eavesdrop.openstack.org/irclogs/%23openstack-placement/%23openstack-placement.2019-06-12.log.html#t2019-06-12T15:04:48 .. _`the review for this spec`: https://review.opendev.org/#/c/662191/ root_member_of -------------- .. note:: When this spec was initially written it was not clear whether there was immediate need to implement this feature. This turned out to be the case. The feature was not implemented in the Train cycle. It will be revisted in the future if needed. **Use case:** I want to limit allocation candidates to trees `whose root provider`_ is (or is not) a member of a certain aggregate. For example, I want to limit candidates to only hosts in (or not in) a specific availability zone. .. note:: We "need" this because of the restriction_ that resourceless request groups must be used with `same_subtree`_. 
Without that restriction, a resourceless ``member_of`` would match a provider anywhere in the tree, including the root. ``root_member_of`` is conceptually identical to `root_required`_, but for aggregates. Like ``member_of[$S]``, ``root_member_of`` supports ``in:``, and can be repeated (in contrast to ``[root_]required[$S]``). Default group_policy to none ---------------------------- A single ``isolate`` setting that applies to the whole request has consistently been shown to be inadequate/confusing/frustrating for all but the simplest anti-affinity use cases. We're not going to get rid of ``group_policy``, but we're going to make it no longer required, defaulting to ``none``. This will allow us to get rid of `at least one hack`_ in nova and provide a clearer user experience, while still allowing us to satisfy simple NUMA use cases. In the future a `granular isolation`_ syntax should make it possible to satisfy more complex scenarios. .. _at least one hack: https://review.opendev.org/657796 .. _granular isolation: (Future) Granular Isolation --------------------------- .. note:: This is currently out of scope, but we wanted to get it written down. The features elsewhere in this spec allow us to specify affinity pretty richly. But anti-affinity (within a provider tree - not between providers) is still all (``group_policy=isolate``) or nothing (``group_policy=none``). We would like to be able to express anti-affinity between/among subsets of the suffixed groups in the request. We propose a new `request-wide`_ query parameter key ``isolate``. The value is a comma-separated list of request group suffix strings ``$S``. Each must exactly match a suffix on a granular group somewhere else in the request. This works on `resourceless request groups`_ as well as those with resources. It is mutually exclusive with the ``group_policy`` query parameter: 400 if both are specified. The effect is the resource providers satisfying each group ``$S`` must satisfy *only* their respective group ``$S``. At one point I thought it made sense for ``isolate`` to be repeatable. But now I can't convince myself that ``isolate={set1}&isolate={set2}`` can ever produce an effect different from ``isolate={set1|set2}``. Perhaps it's because different ``isolate``\ s could be coming from different parts of the calling code? Another alternative would be to isolate the groups from *each other* but not from *other groups*, in which case repeating ``isolate`` could be meaningful. But confusing. Thought will be needed. Interactions ------------ Some discussion on these can be found in the neighborhood of http://eavesdrop.openstack.org/irclogs/%23openstack-placement/%23openstack-placement.2019-05-10.log.html#t2019-05-10T22:02:43 group_policy + same_subtree ~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``group_policy=isolate`` forces the request groups identified in ``same_subtree`` to be satisfied by different providers, whereas ``group_policy=none`` would also allow ``same_subtree`` to degenerate to "same provider". For example, given the following model:: +--------------+ | compute node | +-------+------+ | +-----------+-----------+ | | +-----+-----+ +-----+-----+ |nic1 | |nic2 | |HW_NIC_ROOT| |HW_NIC_ROOT| +-----+-----+ +-----+-----+ | | +----+----+ ... 
| | +--+--+ +--+--+ |pf1_1| |pf1_2| |VF:4 | |VF:4 | +-----+ +-----+ a request for "Two VFs from different PFs on the same NIC":: ?resources_VIF1=VF:1 &resources_VIF2=VF:1 &required_NIC_AFFINITY=HW_NIC_ROOT &same_subtree=_VIF1,_VIF2,_NIC_AFFINITY &group_policy=isolate will return only one candidate:: - pf1_1: {VF:1}, pf1_2: {VF:1} whereas the same request with ``group_policy=none``, meaning "Two VFs from the same NIC":: ?resources_VIF1=VF:1 &resources_VIF2=VF:1 &required_NIC_AFFINITY=HW_NIC_ROOT &same_subtree=_VIF1,_VIF2,_NIC_AFFINITY &group_policy=none will return two additional candidates where both ``VF``\ s are satisfied by the same provider:: - pf1_1: {VF:1}, pf1_2: {VF:1} - pf1_1: {VF:2} - pf1_2: {VF:2} group_policy + resourceless request groups ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Resourceless request groups are treated the same as any other for the purposes of ``group_policy``: * If your resourceless request group is suffixed, ``group_policy=isolate`` means the provider satisfying the resourceless request group will not be able to satisfy any other suffixed group. * If your resourceless request group is unsuffixed, it can be satisfied by *any* provider in the tree, since the unsuffixed group isn't isolated (even with ``group_policy=isolate``). This is important because there are_ cases_ where we want to require certain traits (usually `provider traits`_), and don't want to figure out which other request group might be requesting resources from the same provider. same_subtree + resourceless request groups ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ These *must* be used together -- see `Why enforce resourceless + same_subtree?`_ Impacts ======= Data model impact ----------------- There should be no changes to database table definitions, but the implementation will almost certainly involve adding/changing database queries. There will also likely be changes to python-side objects representing meta-objects used to manage information between the database and the REST layer. However, the data models for the JSON payloads in the REST layer itself will be unaffected. Performance Impact ------------------ The work for ``same_subtree`` will probably (at least initially) be done on the python side as additional filtering under ``_merge_candidates``. This could have some performance impact especially on large data sets. Again, we should optimize requests without ``same_subtree``, where ``same_subtree`` refers to only one group, where no nested providers exist in the database, etc. Resourceless request groups may add a small additional burden to database queries, but it should be negligible. It should be relatively rare in the wild for a resourceless request group to be satisfied by a provider that actually provides no resource to the request, though there are_ cases_ where a resourceless request group would be useful even though the provider *does* provide resources to the request. .. _are: https://review.opendev.org/#/c/645316/ .. _cases: https://review.opendev.org/#/c/656885/ Documentation Impact -------------------- The new query parameters will be documented in the API reference. Microversion paperwork will be done. :doc:`/user/provider-tree` will be updated (and/or split off of). 
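As an aid to that documentation, the semantics of ``same_subtree`` (whose
python-side filtering is described under Performance Impact above) can be
summarized with a minimal illustrative sketch; the helper names and the
simplified candidate/parent-map data structures are assumptions for this
document, not actual placement internals::

    # Illustrative only: helpers and data structures are assumptions,
    # not actual placement code.
    def ancestors(rp, parent_of):
        """Yield rp itself and then each of its ancestors up to the root."""
        while rp is not None:
            yield rp
            rp = parent_of.get(rp)

    def satisfies_same_subtree(group_to_rps, parent_of, suffixes):
        """True if one provider satisfying the listed groups is an ancestor
        of (or identical to) every provider satisfying those groups."""
        rps = set()
        for suffix in suffixes:
            rps |= group_to_rps.get(suffix, set())
        return any(
            all(root in ancestors(rp, parent_of) for rp in rps)
            for root in rps)

    # With the NUMA/FPGA model from the same_subtree section:
    parent_of = {'cn': None, 'numa0': 'cn', 'numa1': 'cn',
                 'fpga0_0': 'numa0', 'fpga1_0': 'numa1', 'fpga1_1': 'numa1'}
    assert satisfies_same_subtree(
        {'_COMPUTE': {'numa0'}, '_ACCEL': {'fpga0_0'}}, parent_of,
        ['_COMPUTE', '_ACCEL'])
    assert not satisfies_same_subtree(
        {'_COMPUTE': {'numa0'}, '_ACCEL': {'fpga1_0'}}, parent_of,
        ['_COMPUTE', '_ACCEL'])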
Security impact --------------- None Other end user impact --------------------- None Other deployer impact --------------------- None Developer impact ---------------- None Upgrade impact -------------- None Implementation ============== Assignee(s) ----------- * cdent * tetsuro * efried * others Dependencies ============ None Testing ======= Code for a gabbi fixture with some complex and interesting characteristics is merged here: https://review.opendev.org/#/c/657463/ Lots of functional testing, primarily via gabbi, will be included. It wouldn't be insane to write some PoC consuming code on the nova side to validate assumptions and use cases. References ========== ...are inline History ======= .. list-table:: Revisions :header-rows: 1 * - Release Name - Description * - Train - Introduced .. _can_split: https://review.opendev.org/658510 ././@PaxHeader0000000000000000000000000000025400000000000011456 xustar0000000000000000150 path=openstack_placement-13.0.0/doc/source/specs/train/implemented/placement-resource-provider-request-group-mapping-in-allocation-candidates.rst 22 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/specs/train/implemented/placement-resource-provider-request-gr0000664000175000017500000004423200000000000034110 0ustar00zuulzuul00000000000000.. This work is licensed under a Creative Commons Attribution 3.0 Unported License. http://creativecommons.org/licenses/by/3.0/legalcode ========================================================================== Provide resource provider - request group mapping in allocation candidates ========================================================================== https://blueprints.launchpad.net/nova/+spec/placement-resource-provider-request-group-mapping-in-allocation-candidates To support QoS minimum bandwidth policy during server scheduling Neutron needs to know which resource provider provides the bandwidth resource for each port in the server create request. Similar needs arise in case of handling VGPUs and accelerator devices. Problem description =================== Placement supports granular request groups in the ``GET allocation_candidates`` query but the returned allocation candidates do not contain explicit information about which granular request group is fulfilled by which RP in the candidate. For example the resource request of a Neutron port is mapped to a granular request group by Nova towards Placement during scheduling. After scheduling Neutron needs the information about which port got allocation from which RP to set up the proper port binding towards those network device RPs. Similar examples can be created with VGPU and accelerator devices. Doing this mapping in Nova is possible (see the `current implementation`_) but scales pretty badly even for small amount of ports in a single server create request. See the `Non-scalable Nova based solution`_ section with detailed examples and analysis. On the other hand when Placement builds an allocation candidate it does that by `building allocations for each granular request group`_. Therefore Placement could include the necessary mapping information in the response with significantly less effort. So doing the mapping in Nova also duplicates logic that is already implemented in Placement. Use Cases --------- The use case of the `bandwidth resource provider spec`_ applies here because to fulfill that use case in a scalable way we need to consider the change proposed in this spec. Similarly handling VGPUs and accelerator devices requires this mapping information as well. 
Proposed change =============== Extend the response of the ``GET /allocation_candidates`` API with an extra field ``mapping`` for each candidate. This field contains a mapping between resource request group names and RP UUIDs for each candidate to express which RP provides the resource for which request groups. Alternatives ------------ For API alternatives about the proposed REST API change see the REST API section. Non-scalable Nova based solution ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Given a single compute with the following inventories:: Compute RP (name=compute1, uuid=compute_uuid) + CPU = 1 | MEMORY = 1024 | DISK = 10 | +--+Network agent RP (for SRIOV agent), + uuid=sriov_agent_uuid | | +--+Physical network interface RP | uuid = uuid5(compute1:eth0) | resources: | NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND=2000 | NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND=2000 | traits: | CUSTOM_PHYSNET_1 | CUSTOM_VNIC_TYPE_DIRECT | +--+Physical network interface RP uuid = uuid5(compute1:eth1) resources: NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND=2000 NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND=2000 traits: CUSTOM_PHYSNET_1 CUSTOM_VNIC_TYPE_DIRECT Example 1 - boot with a single port having bandwidth request ............................................................ Neutron port:: { 'id': 'da941911-a70d-4aac-8be0-c3b263e6fd4f', 'resource_request': { "resources": { "NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND": 1000, "NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND": 1000}, "required": ["CUSTOM_PHYSNET_1", "CUSTOM_VNIC_TYPE_DIRECT"] } } Placement request during scheduling:: GET /placement/allocation_candidates? limit=1000& resources=DISK_GB=1,MEMORY_MB=512,VCPU=1& required1=CUSTOM_PHYSNET_1,CUSTOM_VNIC_TYPE_DIRECT& resources1=NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND=1000, NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND=1000 Placement response:: { "allocation_requests":[ { "allocations":{ uuid5(compute1:eth0):{ "resources":{ "NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND":1000, "NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND":1000 } }, compute_uuid:{ "resources":{ "MEMORY_MB":512, "DISK_GB":1, "VCPU":1 } } } }, // ... another similar allocations with uuid5(compute1:eth1) ], "provider_summaries":{ // ... } } Filter scheduler selects the first candidate that points to uuid5(compute1:eth0) The nova-compute needs to pass RP UUID which provides resource for each port to Neutron in the port binding. To be able to do that nova (in the `current implementation`_ the nova-conductor) needs to find the RP in the selected allocation candidate which provides the resources the Neutron port is requested. The `current implementation`_ does this by checking which RP provides the matching resource classes and resource amounts. During port binding nova updates the port with that network device RP:: { "id":"da941911-a70d-4aac-8be0-c3b263e6fd4f", "resource_request":{ "resources":{ "NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND":1000, "NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND":1000 }, "required":[ "CUSTOM_PHYSNET_1", "CUSTOM_VNIC_TYPE_DIRECT" ] }, "binding:host_id":"compute1", "binding:profile":{ "allocation": uuid5(compute1:eth0) }, } This scenario is easy as only one port is requesting bandwidth resources so there will be only one RP in the each allocation candidate that provides such resources. Example 2 - boot with two ports having bandwidth request ........................................................ 
Neutron port1::

    {
        'id': 'da941911-a70d-4aac-8be0-c3b263e6fd4f',
        'resource_request': {
            "resources": {
                "NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND": 1000,
                "NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND": 1000},
            "required": ["CUSTOM_PHYSNET_1", "CUSTOM_VNIC_TYPE_DIRECT"]
        }
    }

Neutron port2::

    {
        'id': '2f2613ce-95a9-490a-b3c4-5f1c28c1f886',
        'resource_request': {
            "resources": {
                "NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND": 1000,
                "NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND": 2000},
            "required": ["CUSTOM_PHYSNET_1", "CUSTOM_VNIC_TYPE_DIRECT"]
        }
    }

Placement request during scheduling::

    GET /placement/allocation_candidates?
        group_policy=isolate&
        limit=1000&
        resources=DISK_GB=1,MEMORY_MB=512,VCPU=1&
        required1=CUSTOM_PHYSNET_1,CUSTOM_VNIC_TYPE_DIRECT&
        resources1=NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND=1000,
                   NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND=1000&
        required2=CUSTOM_PHYSNET_1,CUSTOM_VNIC_TYPE_DIRECT&
        resources2=NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND=1000,
                   NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND=2000

In the above request the granular request group1 is generated from port1 and
granular request group2 is generated from port2.

Placement response::

    {
        "allocation_requests":[
            {
                "allocations":{
                    uuid5(compute1:eth0):{
                        "resources":{
                            "NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND":1000,
                            "NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND":1000
                        }
                    },
                    uuid5(compute1:eth1):{
                        "resources":{
                            "NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND":1000,
                            "NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND":2000
                        }
                    },
                    compute_uuid:{
                        "resources":{
                            "MEMORY_MB":512,
                            "DISK_GB":1,
                            "VCPU":1
                        }
                    }
                }
            },
            // ... another similar allocation_request where the allocated
            // amounts are reversed between uuid5(compute1:eth0) and
            // uuid5(compute1:eth1)
        ],
        "provider_summaries":{
            // ...
        }
    }

Filter scheduler selects the first candidate. Nova needs to find the RP in
the selected allocation candidate which provides the resources for each
Neutron port request. For the selected allocation candidate there are two
possible port - RP mappings but only one valid mapping if we consider the
bandwidth amounts:

* port1 - uuid5(compute1:eth0)
* port2 - uuid5(compute1:eth1)

When Nova tries to map the first port, port1, then both uuid5(compute1:eth0)
and uuid5(compute1:eth1) still have enough resources in the allocation
request to match the request of port1. So at that point Nova might map port1
to uuid5(compute1:eth1). However this means that Nova will not find any
viable mapping later for port2 and therefore Nova has to go back and retry
to create the mapping with port1 mapped to the other alternative. This means
that Nova needs to implement a full backtracking algorithm to find the
proper mapping.

Scaling considerations
......................

With 4 RPs and 4 ports, in the worst case, we have 4! (24) possible mappings
and each mapping needs 4 steps to be generated (assuming that in the worst
case the mapping of the 4th port is the one that fails). So this
backtracking takes 96 steps, and I think this code will scale pretty badly.
Note that our example uses the group_policy=isolate query param so the RPs
in the allocation candidate cannot overlap. If we set group_policy=none and
therefore allow RP overlapping then the number of necessary calculation
steps could grow even more.

Note that even if having more than 4 ports for a server is considered
unrealistic, additional granular request groups can appear in the allocation
candidate request from other sources than Neutron, e.g. from flavor
extra_spec due to VGPUs or from Cyborg due to accelerators.
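To show why the mapping logic described above amounts to a full backtracking
search, here is a minimal sketch; the function names and the simplified
port/allocation structures are assumptions for illustration only, not the
actual nova code::

    # Illustrative sketch only: simplified structures, not actual nova code.
    # Assumes group_policy=isolate, i.e. each port maps to a distinct RP.
    def fits(request, resources):
        """Does this provider's allocation cover every requested amount?"""
        return all(resources.get(rc, 0) >= amount
                   for rc, amount in request.items())

    def map_ports(ports, allocations, mapping=None):
        """Return {port_id: rp_uuid} or None if no consistent mapping exists.

        ports:       list of (port_id, requested_resources) tuples
        allocations: {rp_uuid: {resource_class: amount}} taken from the
                     selected allocation candidate
        """
        mapping = mapping or {}
        if not ports:
            return mapping
        port_id, request = ports[0]
        for rp_uuid, resources in allocations.items():
            if rp_uuid in mapping.values() or not fits(request, resources):
                continue
            # Tentatively assign this RP and recurse; if the remaining ports
            # cannot be mapped, fall through and try the next RP (this is
            # the backtracking step).
            result = map_ports(ports[1:], allocations,
                               dict(mapping, **{port_id: rp_uuid}))
            if result is not None:
                return result
        return None

In Example 2 above, a first attempt that assigns port1 to
uuid5(compute1:eth1) dead-ends when port2 cannot be placed and has to be
undone; that undo-and-retry work is what grows factorially with the number
of ports and RPs.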
Data model impact ----------------- None REST API impact --------------- Extend the response of the ``GET /allocation_candidates`` API with an extra field ``mappings`` for each candidate in a new microversion. This field contains a mapping between resource request group names and RP UUIDs for each candidate to express which RP provides the resource for which request groups. For the request:: GET /placement/allocation_candidates? resources=DISK_GB=1,MEMORY_MB=512,VCPU=1& required1=CUSTOM_PHYSNET_1,CUSTOM_VNIC_TYPE_DIRECT& resources1=NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND=1000, NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND=1000& required2=CUSTOM_PHYSNET_1,CUSTOM_VNIC_TYPE_DIRECT& resources2=NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND=1000, NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND=2000 Placement would return the response:: { "allocation_requests":[ { "allocations":{ uuid5(compute1:eth0):{ "resources":{ "NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND":1000, "NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND":1000 }, }, uuid5(compute1:eth1):{ "resources":{ "NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND":1000, "NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND":2000 }, }, compute_uuid:{ "resources":{ "MEMORY_MB":512, "DISK_GB":1, "VCPU":1 }, } }, "mappings": { "1": [uuid5(compute1:eth0)], "2": [uuid5(compute1:eth1)], "": [compute_uuid], }, }, { "allocations":{ uuid5(compute1:eth1):{ "resources":{ "NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND":1000, "NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND":1000 }, }, uuid5(compute1:eth0):{ "resources":{ "NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND":1000, "NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND":2000 }, }, compute_uuid:{ "resources":{ "MEMORY_MB":512, "DISK_GB":1, "VCPU":1 }, } }, "mappings": { "1": [uuid5(compute1:eth1)], "2": [uuid5(compute1:eth0)], "": [compute_uuid], }, }, ], "provider_summaries":{ // unchanged } } The numbered groups are always satisfied by a single RP so the length of the mapping value will be always 1. However the unnumbered group might be satisfied by more than one RPs so the length of the mapping value there can be bigger than 1. This new field will be added to the schema for ``POST /allocations``, ``PUT /allocations/{consumer_uuid}``, and ``POST /reshaper`` so the client does not need to strip it from the candidate before posting that back to Placement to make the allocation. The contents of the field will be ignored by these operations. *Alternatively* the mapping can be added as a separate top level key to the response. 
Response:: { "allocation_requests":[ { "allocations":{ uuid5(compute1:eth0):{ "resources":{ "NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND":1000, "NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND":1000 }, }, uuid5(compute1:eth1):{ "resources":{ "NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND":1000, "NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND":2000 }, }, compute_uuid:{ "resources":{ "MEMORY_MB":512, "DISK_GB":1, "VCPU":1 }, } } }, { "allocations":{ uuid5(compute1:eth0):{ "resources":{ "NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND":1000, "NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND":2000 }, }, uuid5(compute1:eth1):{ "resources":{ "NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND":1000, "NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND":1000 }, }, compute_uuid:{ "resources":{ "MEMORY_MB":512, "DISK_GB":1, "VCPU":1 }, } } }, ], "provider_summaries":{ // unchanged } "resource_provider-request_group-mappings":[ { "1": [uuid5(compute1:eth0)], "2": [uuid5(compute1:eth1)], "": [compute_uuid], }, { "1": [uuid5(compute1:eth1)], "2": [uuid5(compute1:eth0)], "": [compute_uuid], } ] } This has the advantage that the allocation requests are unchanged and therefore still can be transparently sent back to placement to do the allocation. This has the disadvantage that one mapping in the ``resource_provider-request_group-mappings`` connected to one candidate in the allocation_requests list by the list index only. We decided to go with the primary proposal. Security impact --------------- None Notifications impact -------------------- None Other end user impact --------------------- None Performance Impact ------------------ None Other deployer impact --------------------- None Developer impact ---------------- None Upgrade impact -------------- None Implementation ============== Assignee(s) ----------- Primary assignee: None Work Items ---------- * Extend the `placement allocation candidate generation algorithm`_ to return the mapping that is internally calculated. * Extend the API with a new microversion to return the mapping to the API client as well * Within the same microverison extend the JSON schema for ``POST /allocations``, ``PUT /allocations/{uuid}``, and ``POST /reshaper`` to accept (and ignore) the mappings key. Dependencies ============ None Testing ======= New gabbi tests for the new API microversion and unit test to cover the unhappy path. Documentation Impact ==================== Placement API ref needs to be updated with the new microversion. References ========== .. _`building allocations for each granular request group`: https://github.com/openstack/nova/blob/6522ea3ecfe99cca3fb33258b11e5a1f34e6e8f0/nova/api/openstack/placement/objects/resource_provider.py#L4113 .. _`bandwidth resource provider spec`: https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/bandwidth-resource-provider.html .. _`current implementation`: https://github.com/openstack/nova/blob/58a1fcc7851930febdb4c1c7ed49357337151f0c/nova/objects/request_spec.py#L761 .. _`placement allocation candidate generation algorithm`: https://github.com/openstack/placement/blob/57026255615679122e6f305dfa3520c012f57ca7/placement/objects/allocation_candidate.py#L207 .. _`Proposed in nova spec repo`: https://review.opendev.org/#/c/597601 History ======= .. 
list-table:: Revisions
   :header-rows: 1

   * - Release Name
     - Description
   * - Stein
     - `Proposed in nova spec repo`_ but was not approved
   * - Train
     - Re-proposed in the placement repo

.. This work is licensed under a Creative Commons Attribution 3.0 Unported
   License. http://creativecommons.org/licenses/by/3.0/legalcode

========================================
Allow provider re-parenting in placement
========================================

https://storyboard.openstack.org/#!/story/2008764

This spec proposes to allow re-parenting and un-parenting (or orphaning) RPs
via the ``PUT /resource_providers/{uuid}`` API in Placement.

Problem description
===================

Today the placement API only allows changing the parent of an RP from None
to a valid RP UUID. However there are use cases where moving an RP between
parents makes sense.

Use Cases
---------

* An existing PGPU RP needs to be moved under the NUMA RP when NUMA is
  modeled.

* We have a `neutron bug`_ that introduced an unwanted change causing SRIOV
  PF RPs to be created under the root RP instead of under the neutron agent
  RP. We can fix the broken logic in neutron but we cannot fix the already
  wrongly parented RP in the DB via the placement API.

.. _`neutron bug`: https://bugs.launchpad.net/neutron/+bug/1921150

Proposed change
===============

Re-parenting is rejected today and the code has the following `comment`_:

    TODO(jaypipes): For now, "re-parenting" and "un-parenting" are not
    possible. If the provider already had a parent, we don't allow changing
    that parent due to various issues, including:

    * if the new parent is a descendant of this resource provider, we
      introduce the possibility of a loop in the graph, which would be very
      bad

    * potentially orphaning heretofore-descendants

    So, for now, let's just prevent re-parenting...

.. _`comment`: https://github.com/openstack/placement/blob/6f00ba5f685183539d0ebf62a4741f2f6930e051/placement/objects/resource_provider.py#L777

The first reason is moot as the loop check is already needed and implemented
for the case when the parent is updated from None to an RP.

The second reason does not make sense to me. By moving an RP under another
RP all the descendants should be moved as well, similar to how the
None -> UUID case works today. So I don't see how we can orphan any RP by
re-parenting.

I see the following possible cases of move:

* RP moved upwards, downwards, side-wards in the same RP tree
* RP moved to a different tree
* RP moved to top level, becoming a new root RP

From the placement perspective every case results in one or more valid RP
trees. Based on the data model, if there were allocations against the moved
RP those allocations will still refer to the RP after the move.
This means that a consumer that has allocations against a single RP tree
before the move might have allocations against multiple trees after the RP
move. Such a consumer is already supported today.

An RP move might invalidate the original intention of the consumer. If the
consumer used an allocation candidate query to select and allocate resources
then via that query the consumer defined a set of rules (e.g. in_tree,
same_subtree) the allocation needs to fulfill. The rules might not be valid
after an RP is moved. However placement never promised to keep such an
invariant as that would require the storage of the rules and correlating
allocation candidate queries and allocations. Moreover such an issue can
already be created with the POST /reshape API as well. Therefore keeping any
such invariant is the responsibility of the client.

So I propose to start supporting all forms of RP re-parenting in a new
placement API microversion.

Alternatives
------------

See the API alternatives below.

Data model impact
-----------------

None

REST API impact
---------------

In a new microversion allow changing the parent_uuid of a resource provider
to None or to any valid RP uuid that does not cause a loop in any of the
trees via the ``PUT /resource_providers/{uuid}`` API.

Protecting against unwanted changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As noted above re-parenting can significantly change the RP model in the
Placement database. So such an action needs to be done carefully. While the
Placement API is already admin only by default, the request was raised at
the Xena PTG for extra safety measures against unintentional parent changes.
During the spec discussion every reviewer expressed the view that such a
safety measure is not really needed. So this spec only proposes to use the
new microversion and extensive documentation to signal the new behavior.
Still, here is the list of alternatives discussed during the review:

* *Do nothing*: While it was considered not safe enough during the PTG,
  during the spec review we ended up choosing this as the main solution.

* *A new query parameter*: A new query parameter is proposed for the
  ``PUT /resource_providers/{uuid}`` API called ``allow_reparenting``. The
  default value of the query parameter is ``False`` and the re-parenting
  cases defined in this spec are only accepted by Placement if the request
  contains the new query parameter with the value ``True``. It is considered
  hacky to add a query parameter for a PUT request.

* *A new field in the request body*: This new field would have the same
  meaning as the proposed query parameter but it would be put into the
  request body. It is considered non-RESTful as such a field is not
  persisted or returned as the result of the PUT request as it does not
  belong to the representation of the ResourceProvider entity the PUT
  request updates.

* *A new Header*: Instead of a new query parameter, use a new HTTP header
  ``x-openstack-placement-allow-provider-reparenting:True``. As the name
  shows this needs a lot more context encoded in it to be specific for the
  API it modifies, while the query parameter is already totally API
  specific.

* *Use a PATCH request for updating the parent*: While this would make the
  parent change more explicit it would also cause great confusion for the
  client for multiple reasons:

  1) Other fields of the same resource provider entity can be updated via
     the PUT request, but not the ``parent_uuid`` field.
2) Changing the ``parent_uuid`` field from None to a valid RP uuid is supported by the PUT request but to change it from one RP uuid to another would require a totally different ``PATCH`` request. * *Use a sub resource*: Signal the explicit re-parenting either in a form of ``PUT /resource-providers/{uuid}/force`` or ``PUT /resource-providers/{uuid}/parent_uuid/{parent}``. While the second option seems to be acceptable to multiple reviewers, I think it will be confusing similarly to ``PATCH``. It would create another way to update a field of an entity while other fields still updated directly on the parent resource. Security impact --------------- None Notifications impact -------------------- N/A Other end user impact --------------------- None Performance Impact ------------------ The loop detection and the possible update of all the RPs in the changed subtree with a new ``root_provider_id`` needs extra processing. However the re-parenting operation is considered very infrequent. So the overall Placement performance is not affected. Other deployer impact --------------------- None Developer impact ---------------- None Upgrade impact -------------- None Implementation ============== Assignee(s) ----------- Primary assignee: balazs-gibizer Feature Liaison --------------- Feature liaison: None Work Items ---------- * Add a new microversion to the Placement API. Implement an extended loop detection and update ``root_provider_id`` of the subtree if needed. * Mark the new microversion osc-placement as supported. Dependencies ============ None Testing ======= * Unit testing * Gabbit API testing Documentation Impact ==================== * API doc needs to be updated. Warn the user that this is a potentially dangerous operation. References ========== None History ======= .. list-table:: Revisions :header-rows: 1 * - Release Name - Description * - Xena - Introduced ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/specs/xena/implemented/support-consumer-types.rst0000664000175000017500000002456400000000000031531 0ustar00zuulzuul00000000000000.. This work is licensed under a Creative Commons Attribution 3.0 Unported License. http://creativecommons.org/licenses/by/3.0/legalcode ====================== Support Consumer Types ====================== https://storyboard.openstack.org/#!/story/2005473 This spec aims at providing support for services to model ``consumer types`` in placement. While placement defines a consumer to be an entity consuming resources from a provider it does not provide a way to identify similar "types" of consumers and henceforth allow services to group/query them based on their types. This spec proposes to associate each consumer to a particular type defined by the service owning the consumer. Problem description =================== In today's placement world each allocation posted by a service is against a provider for a consumer (ex: for an instance or a migration). However a service may want to distinguish amongst the allocations made against its various types of consumers (ex: nova may want to fetch allocations against instances alone). This is currently not possible in placement and hence the goal is to make placement aware of "types of consumers" for the services. Use Cases --------- * Nova using placement as its `quota calculation system`_: Currently this approach uses the nova_api database to calculate the quota on the "number of instances". 
In order for nova to be able to use placement to count the number of "instance-consumers", there needs to be a way by which we can differentiate "instance-consumers" from "migration-consumers". * Ironic wanting to differentiate between "standalone-consumer" versus "nova-consumer". Note that it is not within the scope of placement to model the coordination of the consumer type collisions that may arise between multiple services during their definition. Placement will also not be able to identify or verify correct consumer types (eg, INTANCE versus INSTANCE) from the external service's perspective. Proposed change =============== In order to model consumer types in placement, we will add a new ``consumer_types`` table to the placement database which will have two columns: #. an ``id`` which will be of type integer. #. a ``name`` which will be of type varchar (maximum of 255 characters) and this will have a unique constraint on it. The pattern restrictions for the name will be similar to placement traits and resource class names, i.e restricted to only ``^[A-Z0-9_]+$`` with length restrictions being {1, 255}. A sample look of such a table would be: +--------+----------+ | id | name | +========+==========+ | 1 | INSTANCE | +--------+----------+ | 2 | MIGRATION| +--------+----------+ A new column called ``consumer_type_id`` would be added to the ``consumers`` table to map the consumer to its type. The ``POST /allocations`` and ``PUT /allocations/{consumer_uuid}`` REST API's will gain a new (required) key called ``consumer_type`` which is of type string in their request body's through which the caller can specify what type of consumer it is creating or updating the allocations for. If the specified ``consumer_type`` key is not present in the ``consumer_types`` table, a new entry will be created. Also note that once a consumer type is created, it lives on forever. If this becomes a problem in the future for the operators a tool can be provided to clean them up. In order to maintain parity between the request format of ``PUT /allocations/{consumer_uuid}`` and response format of ``GET /allocations/{consumer_uuid}``, the ``consumer_type`` key will also be exposed through the response of ``GET /allocations/{consumer_uuid}`` request. The external services will be able to leverage this ``consumer_type`` key through the ``GET /usages`` REST API which will have a change in the format of its request and response. The request will gain a new optional key called ``consumer_type`` which will enable users to query usages based on the consumer type. The response will group the resource usages by the specified consumer_type (if consumer_type key is not specified it will return the usages for all the consumer_types) meaning it will gain a new ``consumer_type`` key. Per consumer type we will also return a ``consumer_count`` of consumers of that type. See the `REST API impact`_ section for more details on how this would be done. The above REST API changes and the corresponding changes to the ``/reshaper`` REST API will be available from a new microversion. The existing consumers in placement will have a ``NULL`` value in their consumer_type_id field, which means we do not know what type these consumers are and the service to which the consumers belong to needs to update this information if it wants to avail the ``consumer_types`` feature. 
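For illustration only, the proposed schema addition could look roughly like
the following alembic-style sketch; the migration shape and the constraint
name are assumptions rather than the final implementation::

    # Illustrative sketch of the proposed schema change, not the actual
    # placement migration.
    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        op.create_table(
            'consumer_types',
            sa.Column('id', sa.Integer, primary_key=True, nullable=False),
            sa.Column('name', sa.Unicode(255), nullable=False),
            sa.UniqueConstraint('name', name='uniq_consumer_types0name'),
        )
        # Existing consumers keep a NULL type until the owning service
        # updates them via PUT /allocations/{consumer_uuid}.
        op.add_column(
            'consumers',
            sa.Column('consumer_type_id', sa.Integer,
                      sa.ForeignKey('consumer_types.id'), nullable=True),
        )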
Alternatives ------------ We could create a new REST API to allow users to create consumer types explicitly but it does not make sense to add a new API for a non-user facing feature. Data model impact ----------------- The placement database will get a new ``consumer_types`` table and the ``consumers`` table will get a new ``consumer_type_id`` column that by default will be ``NULL``. REST API impact --------------- The new ``POST /allocations`` request will look like this:: { "30328d13-e299-4a93-a102-61e4ccabe474": { "consumer_generation": 1, "project_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "user_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "consumer_type": "INSTANCE", # This is new "allocations": { "e10927c4-8bc9-465d-ac60-d2f79f7e4a00": { "resources": { "VCPU": 2, "MEMORY_MB": 3 }, "generation": 4 } } }, "71921e4e-1629-4c5b-bf8d-338d915d2ef3": { "consumer_generation": 1, "project_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "user_id": "131d4efb-abc0-4872-9b92-8c8b9dc4320f", "consumer_type": "MIGRATION", # This is new "allocations": {} } } The new ``PUT /allocations/{consumer_uuid}`` request will look like this:: { "allocations": { "4e061c03-611e-4caa-bf26-999dcff4284e": { "resources": { "DISK_GB": 20 } }, "89873422-1373-46e5-b467-f0c5e6acf08f": { "resources": { "MEMORY_MB": 1024, "VCPU": 1 } } }, "consumer_generation": 1, "user_id": "66cb2f29-c86d-47c3-8af5-69ae7b778c70", "project_id": "42a32c07-3eeb-4401-9373-68a8cdca6784", "consumer_type": "INSTANCE" # This is new } Note that ``consumer_type`` is a required key for both these requests at this microversion. The new ``GET /usages`` response will look like this for a request of type ``GET /usages?project_id=&user_id=`` or ``GET /usages?project_id=`` where the consumer_type key is not specified:: { "usages": { "INSTANCE": { "consumer_count": 5, "DISK_GB": 5, "MEMORY_MB": 512, "VCPU": 2 } "MIGRATION": { "consumer_count": 2, "DISK_GB": 5, "MEMORY_MB": 512, "VCPU": 2 } "unknown": { "consumer_count": 1, "DISK_GB": 5, "MEMORY_MB": 512, "VCPU": 2 } } } The new ``GET /usages`` response will look like this for a request of type ``GET /usages?project_id=&user_id=&consumer_type="INSTANCE"`` or ``GET /usages?project_id=&consumer_type="INSTANCE"`` where the consumer_type key is specified:: { "usages": { "INSTANCE": { "consumer_count": 5, "DISK_GB": 5, "MEMORY_MB": 512, "VCPU": 2 } } } A special request of the form ``GET /usages?project_id=&consumer_type=all`` will be allowed to enable users to be able to query for the total count of all the consumers. The response for such a request will look like this:: { "usages": { "all": { "consumer_count": 3, "DISK_GB": 5, "MEMORY_MB": 512, "VCPU": 2 } } } A special request of the form ``GET /usages?project_id=&consumer_type=unknown`` will be allowed to enable users to be able to query for the total count of the consumers that have no consumer type assigned. The response for such a request will look like this:: { "usages": { "unknown": { "consumer_count": 3, "DISK_GB": 5, "MEMORY_MB": 512, "VCPU": 2 } } } Note that ``consumer_type`` is an optional key for the ``GET /usages`` request. The above REST API changes and the corresponding changes to the ``/reshaper`` REST API will be available from a new microversion. Security impact --------------- None. 
Notifications impact -------------------- N/A Other end user impact --------------------- The external services using this feature like nova should take the responsibility of updating the consumer type of existing consumers from ``NULL`` to the actual type through the ``PUT /allocations/{consumer_uuid}`` REST API. Performance Impact ------------------ None. Other deployer impact --------------------- None. Developer impact ---------------- None. Upgrade impact -------------- The ``placement-manage db sync`` command has to be run by the operators in order to upgrade the database schema to accommodate the new changes. Implementation ============== Assignee(s) ----------- Primary assignee: Other contributors: Work Items ---------- * Add the new ``consumer_types`` table and create a new ``consumer_type_id`` column in the ``consumers`` table with a foreign key constraint to the ``id`` column of the ``consumer_types`` table. * Make the REST API changes in a new microversion for: * ``POST /allocations``, * ``PUT /allocations/{consumer_uuid}``, * ``GET /allocations/{consumer_uuid}``, * ``GET /usages`` and * ``/reshaper`` Dependencies ============ None. Testing ======= Unit and functional tests to validate the feature will be added. Documentation Impact ==================== The placement API reference will be updated to reflect the new changes. References ========== .. _quota calculation system: https://review.opendev.org/#/q/topic:bp/count-quota-usage-from-placement History ======= .. list-table:: Revisions :header-rows: 1 * - Release Name - Description * - Train - Introduced * - Xena - Reproposed ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2127779 openstack_placement-13.0.0/doc/source/specs/yoga/0000775000175000017500000000000000000000000021756 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2407777 openstack_placement-13.0.0/doc/source/specs/yoga/implemented/0000775000175000017500000000000000000000000024261 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000023100000000000011451 xustar0000000000000000131 path=openstack_placement-13.0.0/doc/source/specs/yoga/implemented/2005345-placement-mixing-required-traits-with-any-traits.rst 22 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/specs/yoga/implemented/2005345-placement-mixing-required-trait0000664000175000017500000001254700000000000033235 0ustar00zuulzuul00000000000000.. This work is licensed under a Creative Commons Attribution 3.0 Unported License. http://creativecommons.org/licenses/by/3.0/legalcode ============================================== Support mixing required traits with any traits ============================================== https://storyboard.openstack.org/#!/story/2005345 The `any-traits-in-allocation-candidates-query`_ spec proposed to allow querying traits in the form of ``required=in:TRAIT1,TRAIT2``. This spec goes one step further and proposes to allow repeating the ``required`` query parameter to support mixing both ``required=TRAIT1,TRAIT2,!TRAIT3`` and ``required=in:TRAIT1,TRAIT2`` format in a single query. This is needed for Neutron to be able to express that a port needs a resource provider having a specific ``vnic_type`` trait but also having one of the physnet traits the port's network maps to. For example:: GET /allocation_candidates?required1=CUSTOM_VNIC_TYPE_DIRECT& required1=in:CUSTOM_PHYSNET_FOO,CUSTOM_PHYSNET_BAR ... 
requests a networking device RP in the candidates that supports the
``direct`` ``vnic_type`` and is connected either to ``physnet_foo`` or
``physnet_bar`` or both.

Problem description
===================

Neutron, through Nova, needs to be able to query Placement for allocation
candidates that match *at least one* trait from a list of traits while also
matching another specific trait, in a single query.

Use Cases
---------

Neutron wants to use this any(traits) query to express that a port's
bandwidth resource request needs to be fulfilled by a network device RP that
is connected to one of the physnets the network of the given port is
connected to. With Neutron's multiprovider network extension a single
Neutron network can consist of multiple network segments connected to
different physnets. But at the same time Neutron wants to express that the
same RP has a specific vnic_type trait as well.

Proposed change
===============

Extend the ``GET /allocation_candidates`` and ``GET /resource_providers``
requests to allow repeating the ``required`` query parameter so that both
the ``required=TRAIT1,TRAIT2,!TRAIT3`` and ``required=in:TRAIT1,TRAIT2``
syntax can be used in a single query.

Alternatives
------------

None

Data model impact
-----------------

None

REST API impact
---------------

In a new microversion the ``GET /allocation_candidates`` and the
``GET /resource_providers`` query should allow repeating the ``required``
query parameter more than once while supporting both the normal and the
any-trait syntax in the same query.

The ``GET /allocation_candidates`` query having
``required=CUSTOM_VNIC_TYPE_NORMAL&required=in:CUSTOM_PHYSNET1,CUSTOM_PHYSNET2``
parameters should result in allocation candidates where each allocation
candidate has the trait ``CUSTOM_VNIC_TYPE_NORMAL`` and either
``CUSTOM_PHYSNET1`` or ``CUSTOM_PHYSNET2`` (or both).

The ``GET /resource_providers`` query having
``required=CUSTOM_VNIC_TYPE_NORMAL&required=in:CUSTOM_PHYSNET1,CUSTOM_PHYSNET2``
parameters should result in resource providers where each resource provider
has the trait ``CUSTOM_VNIC_TYPE_NORMAL`` and either ``CUSTOM_PHYSNET1`` or
``CUSTOM_PHYSNET2`` (or both).

The response bodies of the ``GET /allocation_candidates`` and
``GET /resource_providers`` queries are unchanged.

Note that the following two queries express exactly the same requirements::

    ?required=in:A,B,C
    &required=X
    &required=Y
    &required=Z

    ?required=in:A,B,C
    &required=X,Y,Z

.. note::

   To ease the implementation we might decide to implement this API change
   in the same microversion in which
   `any-traits-in-allocation-candidates-query`_ is implemented.

Security impact
---------------

None

Notifications impact
--------------------

None

Other end user impact
---------------------

The osc-placement client plugin needs to be updated to support the new
Placement API microversion. This means that the CLI should support providing
the ``--required`` parameter more than once, supporting both the normal and
the any-trait syntax.
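For illustration only, an invocation of the updated plugin might then look
like this (a sketch assuming the new microversion is 1.39 and that
``--required`` becomes repeatable as described above)::

    openstack --os-placement-api-version 1.39 allocation candidate list \
        --resource VCPU=1 \
        --required CUSTOM_VNIC_TYPE_DIRECT \
        --required in:CUSTOM_PHYSNET_FOO,CUSTOM_PHYSNET_BAR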
Performance Impact
------------------

None

Other deployer impact
---------------------

None

Developer impact
----------------

None

Upgrade impact
--------------

None

Implementation
==============

Assignee(s)
-----------

Primary assignee: balazs-gibizer

Work Items
----------

* Extend the resource provider and allocation candidate DB query to support
  more than one set of required traits
* Extend the Placement REST API with a new microversion that supports
  repeating the ``required`` query param
* Extend the osc-placement client plugin to support the new microversion

Dependencies
============

* The `any-traits-in-allocation-candidates-query`_ spec

.. _`any-traits-in-allocation-candidates-query`: https://review.openstack.org/649992

Testing
=======

Both new gabbi and functional tests need to be written for the Placement API
change. Also the osc-placement client plugin will need additional functional
test coverage.

Documentation Impact
====================

The Placement API reference needs to be updated.

References
==========

None

History
=======

.. list-table:: Revisions
   :header-rows: 1

   * - Release Name
     - Description
   * - Rocky
     - Introduced
   * - Stein
     - Reproposed, approved but not implemented
   * - Train
     - Reproposed but not approved due to lack of focus
   * - Yoga
     - Reproposed

.. This work is licensed under a Creative Commons Attribution 3.0 Unported
   License.

   http://creativecommons.org/licenses/by/3.0/legalcode

=================================================
Support any traits in allocation_candidates query
=================================================

https://storyboard.openstack.org/#!/story/2005346

The ``GET /allocation_candidates`` request in Placement supports the
``required`` query parameter. If the caller specifies a list of traits in
the ``required`` parameter then placement will limit the returned allocation
candidates to those RP trees that fulfill *every* trait in that list. To
support minimum bandwidth guarantees in Neutron + Nova we need to be able to
query allocation candidates that fulfill *at least one* trait from a list of
traits specified in the query. This is required for the case when a Neutron
network maps to more than one physnet but the port's bandwidth request can
be fulfilled from any physnet the port's network maps to.

Problem description
===================

Neutron, through Nova, needs to be able to query Placement for allocation
candidates that match *at least one* trait from the list of traits provided
in the query.

Use Cases
---------

Neutron wants to use this any(traits) query to express that a port's
bandwidth resource request needs to be fulfilled by a network device RP that
is connected to one of the physnets the network of the given port is
connected to. With Neutron's multiprovider network extension a single
Neutron network can consist of multiple network segments connected to
different physnets.

Proposed change
===============

Extend the ``GET /allocation_candidates`` and ``GET /resource_providers``
requests with a new ``required=in:TRAIT1,TRAIT2`` query parameter syntax and
change the placement implementation to support this new syntax.
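As a sketch of the proposed syntax (the trait names are illustrative), such
a query might look like::

    GET /allocation_candidates?resources=VCPU:1&required=in:CUSTOM_PHYSNET1,CUSTOM_PHYSNET2

and would only return candidates in which at least one of
``CUSTOM_PHYSNET1`` or ``CUSTOM_PHYSNET2`` is present among the providers'
traits.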
The `granular-resource-requests`_ spec proposes support for multiple request
groups in the Placement query, identified by a positive integer postfix on
the ``required`` query param. The new ``in:TRAIT1,TRAIT2`` syntax is
applicable to those numbered ``required`` query params as well.

.. _`granular-resource-requests`: https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/granular-resource-requests.html

Alternatives
------------

During the Train review Sean suggested using ``any``, ``all``, ``none``
instead of the currently proposed ``in:`` syntax. However, to keep the API
consistent, we decided to continue using ``in:`` for traits as it is already
used for aggregates. Still, we think that ``any``, ``all``, ``none`` would
be a better syntax, but that requires a separate effort changing the
existing query syntax as well.

Data model impact
-----------------

None

REST API impact
---------------

Today the ``GET /allocation_candidates`` and ``GET /resource_providers``
queries support the ``required`` query param in the form of
``required=TRAIT1,TRAIT2,!TRAIT3``. This spec proposes to implement a new
microversion to allow the format ``required=in:TRAIT1,TRAIT2`` as well as
the old format.

Each resource provider returned from a request having
``required=in:TRAIT1,TRAIT2`` should have *at least* one matching trait from
TRAIT1 and TRAIT2.

``required=in:TRAIT1,TRAIT2`` used in a ``GET /allocation_candidates`` query
means that the union of all the traits across all the providers in every
allocation candidate must contain at least one of TRAIT1, TRAIT2.

``requiredX=in:TRAIT1,TRAIT2`` used in a ``GET /allocation_candidates``
query means that the resource provider that satisfies the requirement of the
granular request group ``X`` must also have at least one of TRAIT1, TRAIT2.

The response bodies of the ``GET /allocation_candidates`` and
``GET /resource_providers`` queries are unchanged.

A separate subsequent spec will propose to support repeating the
``required`` query param more than once to allow mixing the two formats.

Note that mixing required and forbidden trait requirements in the same
``required=in:`` query param, like ``required=in:TRAIT1,!TRAIT2``, will not
be supported and will result in an HTTP 400 response.

Security impact
---------------

None

Notifications impact
--------------------

None

Other end user impact
---------------------

The osc-placement client plugin needs to be updated to support the new
Placement API microversion. That plugin currently supports the
``--required`` CLI parameter accepting a list of traits, so this patch
proposes to extend that parameter to also accept the ``in:TRAIT1,TRAIT2``
format.

Performance Impact
------------------

None

Other deployer impact
---------------------

None

Developer impact
----------------

None

Upgrade impact
--------------

None

Implementation
==============

Assignee(s)
-----------

Primary assignee: balazs-gibizer

Work Items
----------

* Extend the resource provider and allocation candidate DB query to support
  the new type of query
* Extend the Placement REST API with a new microversion that supports the
  any-trait syntax
* Extend the osc-placement client plugin to support the new microversion

Dependencies
============

* The osc-placement client plugin can only be extended with the new
  microversion support if every older microversion is already supported,
  which is not the case today.

Testing
=======

Both new gabbi and functional tests need to be written for the Placement API
change. Also the osc-placement client plugin will need additional functional
test coverage.
Documentation Impact ==================== The Placement API reference needs to be updated. References ========== * osc-placement `review`_ series adding support for latest Placement microversions .. _`review`: https://review.openstack.org/#/c/548326 History ======= .. list-table:: Revisions :header-rows: 1 * - Release Name - Description * - Rocky - Introduced * - Stein - Reproposed, approved but not implemented * - Train - Reproposed but not approved due to lack of focus * - Yoga - Reproposed ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2127779 openstack_placement-13.0.0/doc/source/specs/zed/0000775000175000017500000000000000000000000021601 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2407777 openstack_placement-13.0.0/doc/source/specs/zed/approved/0000775000175000017500000000000000000000000023421 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/specs/zed/approved/template.rst0000664000175000017500000002650700000000000026000 0ustar00zuulzuul00000000000000.. This work is licensed under a Creative Commons Attribution 3.0 Unported License. http://creativecommons.org/licenses/by/3.0/legalcode ======================== Example Spec - The title ======================== Include the URL of your story from StoryBoard: https://storyboard.openstack.org/#!/story/XXXXXXX Introduction paragraph -- why are we doing anything? A single paragraph of prose that operators can understand. The title and this first paragraph should be used as the subject line and body of the commit message respectively. Some notes about the spec process: * Not all blueprints need a spec, start with a story. * The aim of this document is first to define the problem we need to solve, and second agree the overall approach to solve that problem. * This is not intended to be extensive documentation for a new feature. For example, there is no need to specify the exact configuration changes, nor the exact details of any DB model changes. But you should still define that such changes are required, and be clear on how that will affect upgrades. * You should aim to get your spec approved before writing your code. While you are free to write prototypes and code before getting your spec approved, its possible that the outcome of the spec review process leads you towards a fundamentally different solution than you first envisaged. * But API changes are held to a much higher level of scrutiny. As soon as an API change merges, we must assume it could be in production somewhere, and as such, we then need to support that API change forever. To avoid getting that wrong, we do want lots of details about API changes up front. Some notes about using this template: * Your spec should be in ReSTructured text, like this template. * Please wrap text at 79 columns. * The filename in the git repository should start with the StoryBoard story number. For example: ``2005171-allocation-partitioning.rst``. * Please do not delete any of the sections in this template. If you have nothing to say for a whole section, just write: None * For help with syntax, see http://sphinx-doc.org/rest.html * To test out your formatting, build the docs using ``tox -e docs`` and see the generated HTML file in doc/build/html/specs/. 
The generated file will have an ``.html`` extension where the original has ``.rst``. * If you would like to provide a diagram with your spec, ascii diagrams are often the best choice. http://asciiflow.com/ is a useful tool. If ascii is insufficient, you have the option to use seqdiag_ or actdiag_. .. _seqdiag: http://blockdiag.com/en/seqdiag/index.html .. _actdiag: http://blockdiag.com/en/actdiag/index.html Problem description =================== A detailed description of the problem. What problem is this feature addressing? Use Cases --------- What use cases does this address? What impact on actors does this change have? Ensure you are clear about the actors in each use case: Developer, End User, Deployer etc. Proposed change =============== Here is where you cover the change you propose to make in detail. How do you propose to solve this problem? If this is one part of a larger effort make it clear where this piece ends. In other words, what's the scope of this effort? At this point, if you would like to get feedback on if the problem and proposed change fit in placement, you can stop here and post this for review saying: Posting to get preliminary feedback on the scope of this spec. Alternatives ------------ What other ways could we do this thing? Why aren't we using those? This doesn't have to be a full literature review, but it should demonstrate that thought has been put into why the proposed solution is an appropriate one. Data model impact ----------------- Changes which require modifications to the data model often have a wider impact on the system. The community often has strong opinions on how the data model should be evolved, from both a functional and performance perspective. It is therefore important to capture and gain agreement as early as possible on any proposed changes to the data model. Questions which need to be addressed by this section include: * What new data objects and/or database schema changes is this going to require? * What database migrations will accompany this change? * How will the initial set of new data objects be generated? For example if you need to take into account existing instances, or modify other existing data, describe how that will work. API impact ---------- Each API method which is either added or changed should have the following * Specification for the method * A description of what the method does suitable for use in user documentation * Method type (POST/PUT/GET/DELETE) * Normal http response code(s) * Expected error http response code(s) * A description for each possible error code should be included describing semantic errors which can cause it such as inconsistent parameters supplied to the method, or when a resource is not in an appropriate state for the request to succeed. Errors caused by syntactic problems covered by the JSON schema definition do not need to be included. * URL for the resource * URL should not include underscores; use hyphens instead. * Parameters which can be passed via the url * JSON schema definition for the request body data if allowed * Field names should use snake_case style, not camelCase or MixedCase style. * JSON schema definition for the response body data if any * Field names should use snake_case style, not camelCase or MixedCase style. * Example use case including typical API samples for both data supplied by the caller and the response * Discuss any policy changes, and discuss what things a deployer needs to think about when defining their policy. 
Note that the schema should be defined as restrictively as possible. Parameters which are required should be marked as such and only under exceptional circumstances should additional parameters which are not defined in the schema be permitted (eg additionalProperties should be False). Reuse of existing predefined parameter types such as regexps for passwords and user defined names is highly encouraged. Security impact --------------- Describe any potential security impact on the system. Some of the items to consider include: * Does this change touch sensitive data such as tokens, keys, or user data? * Does this change alter the API in a way that may impact security, such as a new way to access sensitive information or a new way to log in? * Does this change involve cryptography or hashing? * Does this change require the use of sudo or any elevated privileges? * Does this change involve using or parsing user-provided data? This could be directly at the API level or indirectly such as changes to a cache layer. * Can this change enable a resource exhaustion attack, such as allowing a single API interaction to consume significant server resources? Some examples of this include launching subprocesses for each connection, or entity expansion attacks in XML. For more detailed guidance, please see the OpenStack Security Guidelines as a reference (https://wiki.openstack.org/wiki/Security/Guidelines). These guidelines are a work in progress and are designed to help you identify security best practices. For further information, feel free to reach out to the OpenStack Security Group at openstack-security@lists.openstack.org. Other end user impact --------------------- Aside from the API, are there other ways a user will interact with this feature? * Does this change have an impact on osc-placement? What does the user interface there look like? Performance Impact ------------------ Describe any potential performance impact on the system, for example how often will new code be called, and is there a major change to the calling pattern of existing code. Examples of things to consider here include: * A small change in a utility function or a commonly used decorator can have a large impacts on performance. * Calls which result in a database queries can have a profound impact on performance when called in critical sections of the code. * Will the change include any locking, and if so what considerations are there on holding the lock? Other deployer impact --------------------- Discuss things that will affect how you deploy and configure OpenStack that have not already been mentioned, such as: * What config options are being added? Should they be more generic than proposed? Are the default values ones which will work well in real deployments? * Is this a change that takes immediate effect after its merged, or is it something that has to be explicitly enabled? * If this change is a new binary, how would it be deployed? * Please state anything that those doing continuous deployment, or those upgrading from the previous release, need to be aware of. Also describe any plans to deprecate configuration values or features. Developer impact ---------------- Discuss things that will affect other developers working on OpenStack. Upgrade impact -------------- Describe any potential upgrade impact on the system. Implementation ============== Assignee(s) ----------- Who is leading the writing of the code? Or is this a blueprint where you're throwing it out there to see who picks it up? 
If more than one person is working on the implementation, please designate the primary author and contact. Primary assignee: Other contributors: Work Items ---------- Work items or tasks -- break the feature up into the things that need to be done to implement it. Those parts might end up being done by different people, but we're mostly trying to understand the timeline for implementation. Dependencies ============ * Include specific references to other specs or stories that this one either depends on or is related to. * If this requires new functionality in another project that is not yet used document that fact. * Does this feature require any new library dependencies or code otherwise not included in OpenStack? Or does it depend on a specific version of a library? Testing ======= Please discuss the important scenarios that need to be tested, as well as specific edge cases we should be ensuring work correctly. Documentation Impact ==================== Which audiences are affected most by this change, and which documentation titles on docs.openstack.org should be updated because of this change? Don't repeat details discussed above, but reference them here in the context of documentation for multiple audiences. References ========== Please add any useful references here. You are not required to have any references. Moreover, this specification should still make sense when your references are unavailable. Examples of what you could include are: * Links to mailing list or IRC discussions * Links to notes from a summit session * Links to relevant research, if appropriate * Anything else you feel it is worthwhile to refer to History ======= Optional section intended to be used each time the spec is updated to describe new design, API or any database schema updated. Useful to let the reader understand how the spec has changed over time. .. list-table:: Revisions :header-rows: 1 * - Release Name - Description * - - Introduced ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2407777 openstack_placement-13.0.0/doc/source/user/0000775000175000017500000000000000000000000020660 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/user/index.rst0000664000175000017500000001112600000000000022522 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ================= Placement Usage ================= Tracking Resources ================== The placement service enables other projects to track their own resources. Those projects can register/delete their own resources to/from placement via the placement `HTTP API`_. The placement service originated in the :nova-doc:`Nova project `. As a result much of the functionality in placement was driven by nova's requirements. However, that functionality was designed to be sufficiently generic to be used by any service that needs to manage the selection and consumption of resources. 
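As a brief sketch of what such registration can look like (the provider
name, placeholder UUID, and custom resource class here are illustrative
only), a service might create a provider, define a custom resource class,
and then set inventory with requests like::

    POST /resource_providers
    {"name": "my-service-node-01"}

    PUT /resource_classes/CUSTOM_GADGET

    PUT /resource_providers/{uuid}/inventories/CUSTOM_GADGET
    {"resource_provider_generation": 0, "total": 64}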
How Nova Uses Placement ----------------------- Two processes, ``nova-compute`` and ``nova-scheduler``, host most of nova's interaction with placement. The nova resource tracker in ``nova-compute`` is responsible for `creating the resource provider`_ record corresponding to the compute host on which the resource tracker runs, `setting the inventory`_ that describes the quantitative resources that are available for workloads to consume (e.g., ``VCPU``), and `setting the traits`_ that describe qualitative aspects of the resources (e.g., ``STORAGE_DISK_SSD``). If other projects -- for example, Neutron or Cyborg -- wish to manage resources on a compute host, they should create resource providers as children of the compute host provider and register their own managed resources as inventory on those child providers. For more information, see the :doc:`Modeling with Provider Trees `. The ``nova-scheduler`` is responsible for selecting a set of suitable destination hosts for a workload. It begins by formulating a request to placement for a list of `allocation candidates`_. That request expresses quantitative and qualitative requirements, membership in aggregates, and in more complex cases, the topology of related resources. That list is reduced and ordered by filters and weighers within the scheduler process. An `allocation`_ is made against a resource provider representing a destination, consuming a portion of the inventory set by the resource tracker. .. toctree:: :hidden: provider-tree .. _HTTP API: https://docs.openstack.org/api-ref/placement/ .. _creating the resource provider: https://docs.openstack.org/api-ref/placement/?expanded=create-resource-provider-detail#create-resource-provider .. _setting the inventory: https://docs.openstack.org/api-ref/placement/?expanded=update-resource-provider-inventories-detail#update-resource-provider-inventories .. _setting the traits: https://docs.openstack.org/api-ref/placement/?expanded=update-resource-provider-traits-detail#update-resource-provider-traits .. _allocation candidates: https://docs.openstack.org/api-ref/placement/?expanded=list-allocation-candidates-detail#list-allocation-candidates .. _allocation: https://docs.openstack.org/api-ref/placement/?expanded=update-allocations-detail#update-allocations REST API ======== The placement API service provides a well-documented, JSON-based `HTTP API`_ and data model. It is designed to be easy to use from whatever HTTP client is suitable. There is a plugin to the openstackclient_ command line tool called osc-placement_ which is useful for occasional inspection and manipulation of the resources in the placement service. .. _HTTP API: https://docs.openstack.org/api-ref/placement/ .. _openstackclient: https://pypi.org/project/openstackclient/ .. _osc-placement: https://pypi.org/project/osc-placement/ Microversions ------------- The placement API uses microversions for making incremental changes to the API which client requests must opt into. It is especially important to keep in mind that nova-compute is a client of the placement REST API and based on how Nova supports rolling upgrades the nova-compute service could be Newton level code making requests to an Ocata placement API, and vice-versa, an Ocata compute service in a cells v2 cell could be making requests to a Newton placement API. This history of placement microversions may be found in the following subsection. .. 
toctree:: :maxdepth: 2 ../placement-api-microversion-history ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/source/user/provider-tree.rst0000664000175000017500000007264500000000000024217 0ustar00zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ============================== Modeling with Provider Trees ============================== Overview ======== Placement supports modeling a hierarchical relationship between different resource providers. While a parent provider can have multiple child providers, a child provider can belong to only one parent provider. Therefore, the whole architecture can be considered as a "tree" structure, and the resource provider on top of the "tree" is called a "root provider". (See the `Nested Resource Providers`_ spec for details.) Modeling the relationship is done by specifying a parent provider via the `POST /resource_providers`_ operation when creating a resource provider. .. note:: If the parent provider hasn't been set, you can also parent a resource provider after the creation via the `PUT /resource_providers/{uuid}`_ operation. But re-parenting a resource provider is not supported. The resource providers in a tree -- and sharing providers as described in the next section -- can be returned in a single allocation request in the response of the `GET /allocation_candidates`_ operation. This means that the placement service looks up a resource provider tree in which resource providers can *collectively* contain all of the requested resources. This document describes some case studies to explain how sharing providers, aggregates, and traits work if provider trees are involved in the `GET /allocation_candidates`_ operation. Sharing Resource Providers ========================== Resources on sharing resource providers can be shared by multiple resource provider trees. This means that a sharing provider can be in one allocation request with resource providers from a different tree in the response of the `GET /allocation_candidates`_ operation. As an example, this may be used for shared storage that is connected to multiple compute hosts. .. note:: Technically, a resource provider with the ``MISC_SHARES_VIA_AGGREGATE`` trait becomes a sharing resource provider and the resources on it are shared by other resource providers in the same aggregate. 
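In API terms (a sketch only; the provider and aggregate UUIDs and generation
values are illustrative), a storage provider can be made a sharing provider
by assigning it that trait and associating it with the same aggregate as the
compute hosts::

    PUT /resource_providers/{uuid}/traits
    {"resource_provider_generation": 1, "traits": ["MISC_SHARES_VIA_AGGREGATE"]}

    PUT /resource_providers/{uuid}/aggregates
    {"resource_provider_generation": 2, "aggregates": ["42896e0d-205d-4fe3-bd1e-100924931787"]}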
For example, let's say we have the following environment:: +-------------------------------+ +-------------------------------+ | Sharing Storage (SS1) | | Sharing Storage (SS2) | | resources: | | resources: | | DISK_GB: 1000 | | DISK_GB: 1000 | | aggregate: [aggA] | | aggregate: [] | | trait: | | trait: | | [MISC_SHARES_VIA_AGGREGATE] | | [MISC_SHARES_VIA_AGGREGATE] | +---------------+---------------+ +-------------------------------+ | Shared via aggA +-----------+-----------+ +-----------------------+ | Compute Node (CN1) | | Compute Node (CN2) | | resources: | | resources: | | VCPU: 8 | | VCPU: 8 | | MEMORY_MB: 1024 | | MEMORY_MB: 1024 | | DISK_GB: 1000 | | DISK_GB: 1000 | | aggregate: [aggA] | | aggregate: [] | | trait: [] | | trait: [] | +-----------------------+ +-----------------------+ Assuming no allocations have yet been made against any of the resource providers, the request:: GET /allocation_candidates?resources=VCPU:1,MEMORY_MB:512,DISK_GB:500 would return three combinations as the allocation candidates. 1. ``CN1`` (``VCPU``, ``MEMORY_MB``, ``DISK_GB``) 2. ``CN2`` (``VCPU``, ``MEMORY_MB``, ``DISK_GB``) 3. ``CN1`` (``VCPU``, ``MEMORY_MB``) + ``SS1`` (``DISK_GB``) ``SS2`` is also a sharing provider, but not in the allocation candidates because it can't satisfy the resource itself and it isn't in any aggregate, so it is not shared by any resource providers. When a provider tree structure is present, sharing providers are shared by the whole tree if one of the resource providers from the tree is connected to the sharing provider via an aggregate. For example, let's say we have the following environment where NUMA resource providers are child providers of the compute host resource providers:: +------------------------------+ | Sharing Storage (SS1) | | resources: | | DISK_GB: 1000 | | agg: [aggA] | | trait: | | [MISC_SHARES_VIA_AGGREGATE]| +--------------+---------------+ | aggA +--------------------------------+ | +--------------------------------+ | +--------------------------+ | | | +--------------------------+ | | | Compute Node (CN1) | | | | | Compute Node (CN2) | | | | resources: +-----+-----+ resources: | | | | MEMORY_MB: 1024 | | | | MEMORY_MB: 1024 | | | | DISK_GB: 1000 | | | | DISK_GB: 1000 | | | | agg: [aggA, aggB] | | | | agg: [aggA] | | | +-----+-------------+------+ | | +-----+-------------+------+ | | | nested | nested | | | nested | nested | | +-----+------+ +----+------+ | | +-----+------+ +----+------+ | | | NUMA1_1 | | NUMA1_2 | | | | NUMA2_1 | | NUMA2_2 | | | | VCPU: 8 | | VCPU: 8 | | | | VCPU: 8 | | VCPU: 8 | | | | agg:[] | | agg:[] | | | | agg:[aggB]| | agg:[] | | | +------------+ +-----------+ | | +------------+ +-----------+ | +--------------------------------+ +--------------------------------+ Assuming no allocations have yet been made against any of the resource providers, the request:: GET /allocation_candidates?resources=VCPU:1,MEMORY_MB:512,DISK_GB:500 would return eight combinations as the allocation candidates. 1. ``NUMA1_1`` (``VCPU``) + ``CN1`` (``MEMORY_MB``, ``DISK_GB``) 2. ``NUMA1_2`` (``VCPU``) + ``CN1`` (``MEMORY_MB``, ``DISK_GB``) 3. ``NUMA2_1`` (``VCPU``) + ``CN2`` (``MEMORY_MB``, ``DISK_GB``) 4. ``NUMA2_2`` (``VCPU``) + ``CN2`` (``MEMORY_MB``, ``DISK_GB``) 5. ``NUMA1_1`` (``VCPU``) + ``CN1`` (``MEMORY_MB``) + ``SS1`` (``DISK_GB``) 6. ``NUMA1_2`` (``VCPU``) + ``CN1`` (``MEMORY_MB``) + ``SS1`` (``DISK_GB``) 7. ``NUMA2_1`` (``VCPU``) + ``CN2`` (``MEMORY_MB``) + ``SS1`` (``DISK_GB``) 8. 
``NUMA2_2`` (``VCPU``) + ``CN2`` (``MEMORY_MB``) + ``SS1`` (``DISK_GB``) Note that ``NUMA1_1`` and ``SS1``, for example, are not in the same aggregate, but they can be in one allocation request since the tree of ``CN1`` is connected to ``SS1`` via aggregate A on ``CN1``. Filtering Aggregates ==================== What differs between the ``CN1`` and ``CN2`` in the example above emerges when you specify the aggregate explicitly in the `GET /allocation_candidates`_ operation with the ``member_of`` query parameter. The ``member_of`` query parameter accepts aggregate uuids and filters candidates to the resource providers in the given aggregate. See the `Filtering by Aggregate Membership`_ spec for details. Note that the `GET /allocation_candidates`_ operation assumes that "an aggregate on a root provider spans the whole tree, while an aggregate on a non-root provider does NOT span the whole tree." For example, in the environment above, the request:: GET /allocation_candidates?resources=VCPU:1,MEMORY_MB:512,DISK_GB:500&member_of= would return eight candidates, 1. ``NUMA1_1`` (``VCPU``) + ``CN1`` (``MEMORY_MB``, ``DISK_GB``) 2. ``NUMA1_2`` (``VCPU``) + ``CN1`` (``MEMORY_MB``, ``DISK_GB``) 3. ``NUMA2_1`` (``VCPU``) + ``CN2`` (``MEMORY_MB``, ``DISK_GB``) 4. ``NUMA2_2`` (``VCPU``) + ``CN2`` (``MEMORY_MB``, ``DISK_GB``) 5. ``NUMA1_1`` (``VCPU``) + ``CN1`` (``MEMORY_MB``) + ``SS1`` (``DISK_GB``) 6. ``NUMA1_2`` (``VCPU``) + ``CN1`` (``MEMORY_MB``) + ``SS1`` (``DISK_GB``) 7. ``NUMA2_1`` (``VCPU``) + ``CN2`` (``MEMORY_MB``) + ``SS1`` (``DISK_GB``) 8. ``NUMA2_2`` (``VCPU``) + ``CN2`` (``MEMORY_MB``) + ``SS1`` (``DISK_GB``) This is because aggregate A is on the root providers, ``CN1`` and ``CN2``, so the API assumes the child providers ``NUMA1_1``, ``NUMA1_2``, ``NUMA2_1`` and ``NUMA2_2`` are also in the aggregate A. Specifying aggregate B:: GET /allocation_candidates?resources=VCPU:1,MEMORY_MB:512,DISK_GB:500&member_of= would return two candidates. 1. ``NUMA1_1`` (``VCPU``) + ``CN1`` (``MEMORY_MB``, ``DISK_GB``) 2. ``NUMA1_2`` (``VCPU``) + ``CN1`` (``MEMORY_MB``, ``DISK_GB``) This is because ``SS1`` is not in aggregate B, and because aggregate B on ``NUMA2_1`` doesn't span the whole tree since the ``NUMA2_1`` resource provider isn't a root resource provider. Filtering by Traits =================== Traits are not only used to indicate sharing providers. They are used to denote capabilities of resource providers. (See `The Traits API`_ spec for details.) Traits can be requested explicitly in the `GET /allocation_candidates`_ operation with the ``required`` query parameter, but traits on resource providers never span other resource providers. If a trait is requested, one of the resource providers that appears in the allocation candidate should have the trait regardless of sharing or nested providers. See the `Request Traits`_ spec for details. The ``required`` query parameter also supports negative expression, via the ``!`` prefix, for forbidden traits. If a forbidden trait is specified, none of the resource providers that appear in the allocation candidate may have that trait. See the `Forbidden Traits`_ spec for details. The ``required`` parameter also supports the syntax ``in:T1,T2,...`` which means we are looking for resource providers that have either T1 or T2 traits on them. The two trait query syntax can be combined by repeating the ``required`` query parameter. So querying providers having (T1 or T2) and T3 and not T4 can be expressed with ``required=in:T1,T2&required=T3,!T4``. 
For example, let's say we have the following environment:: +----------------------------------------------------+ | +----------------------------------------------+ | | | Compute Node (CN1) | | | | resources: | | | | VCPU: 8, MEMORY_MB: 1024, DISK_GB: 1000 | | | | trait: [] | | | +----------+------------------------+----------+ | | | nested | nested | | +----------+-----------+ +----------+----------+ | | | NIC1_1 | | NIC1_2 | | | | resources: | | resources: | | | | SRIOV_NET_VF:8 | | SRIOV_NET_VF:8 | | | | trait: | | trait: | | | | [HW_NIC_ACCEL_SSL]| | [] | | | +----------------------+ +---------------------+ | +----------------------------------------------------+ Assuming no allocations have yet been made against any of the resource providers, the request:: GET /allocation_candidates?resources=VCPU:1,MEMORY_MB:512,DISK_GB:500,SRIOV_NET_VF:2 &required=HW_NIC_ACCEL_SSL would return only ``NIC1_1`` for ``SRIOV_NET_VF``. As a result, we get one candidate. 1. ``CN1`` (``VCPU``, ``MEMORY_MB``, ``DISK_GB``) + ``NIC1_1`` (``SRIOV_NET_VF``) In contrast, for forbidden traits:: GET /allocation_candidates?resources=VCPU:1,MEMORY_MB:512,DISK_GB:500,SRIOV_NET_VF:2 &required=!HW_NIC_ACCEL_SSL would exclude ``NIC1_1`` for ``SRIOV_NET_VF``. 1. ``CN1`` (``VCPU``, ``MEMORY_MB``, ``DISK_GB``) + ``NIC1_2`` (``SRIOV_NET_VF``) If the trait is not in the ``required`` parameter, that trait will simply be ignored in the `GET /allocation_candidates`_ operation. For example:: GET /allocation_candidates?resources=VCPU:1,MEMORY_MB:512,DISK_GB:500,SRIOV_NET_VF:2 would return two candidates. 1. ``CN1`` (``VCPU``, ``MEMORY_MB``, ``DISK_GB``) + ``NIC1_1`` (``SRIOV_NET_VF``) 2. ``CN1`` (``VCPU``, ``MEMORY_MB``, ``DISK_GB``) + ``NIC1_2`` (``SRIOV_NET_VF``) Granular Resource Requests ========================== If you want to get the same kind of resources from multiple resource providers at once, or if you require a provider of a particular requested resource class to have a specific trait or aggregate membership, you can use the `Granular Resource Request`_ feature. This feature is enabled by numbering the ``resources``, ``member_of`` and ``required`` query parameters respectively. For example, in the environment above, the request:: GET /allocation_candidates?resources=VCPU:1,MEMORY_MB:512,DISK_GB:500 &resources1=SRIOV_NET_VF:1&required1=HW_NIC_ACCEL_SSL &resources2=SRIOV_NET_VF:1 &group_policy=isolate would return one candidate where two providers serve ``SRIOV_NET_VF`` resource. 1. ``CN1`` (``VCPU``, ``MEMORY_MB``, ``DISK_GB``) + ``NIC1_1`` (``SRIOV_NET_VF:1``) + ``NIC1_2`` (``SRIOV_NET_VF:1``) The ``group_policy=isolate`` ensures that the one resource is from a provider with the ``HW_NIC_ACCEL_SSL`` trait and the other is from *another* provider with no trait constraints. If the ``group_policy`` is set to ``none``, it allows multiple granular requests to be served by one provider. Namely:: GET /allocation_candidates?resources=VCPU:1,MEMORY_MB:512,DISK_GB:500 &resources1=SRIOV_NET_VF:1&required1=HW_NIC_ACCEL_SSL &resources2=SRIOV_NET_VF:1 &group_policy=none would return two candidates. 1. ``CN1`` (``VCPU``, ``MEMORY_MB``, ``DISK_GB``) + ``NIC1_1`` (``SRIOV_NET_VF:1``) + ``NIC1_2`` (``SRIOV_NET_VF:1``) 2. ``CN1`` (``VCPU``, ``MEMORY_MB``, ``DISK_GB``) + ``NIC1_1`` (``SRIOV_NET_VF:2``) This is because ``NIC1_1`` satisfies both request 1 (with ``HW_NIC_ACCEL_SSL`` trait) and request 2 (with no trait constraints). 
Note that if ``member_of`` is specified in granular requests, the API doesn't assume that "an aggregate on a root provider spans the whole tree." It just sees whether the specified aggregate is directly associated with the resource provider when looking up the candidates. Filtering by Tree ================= If you want to filter the result by a specific provider tree, use the `Filter Allocation Candidates by Provider Tree`_ feature with the ``in_tree`` query parameter. For example, let's say we have the following environment:: +-----------------------+ +-----------------------+ | Sharing Storage (SS1) | | Sharing Storage (SS2) | | DISK_GB: 1000 | | DISK_GB: 1000 | +-----------+-----------+ +-----------+-----------+ | | +-----------------+----------------+ | Shared via an aggregate +-----------------+----------------+ | | +--------------|---------------+ +--------------|--------------+ | +------------+-------------+ | | +------------+------------+ | | | Compute Node (CN1) | | | | Compute Node (CN2) | | | | DISK_GB: 1000 | | | | DISK_GB: 1000 | | | +-----+-------------+------+ | | +----+-------------+------+ | | | nested | nested | | | nested | nested | | +-----+------+ +----+------+ | | +----+------+ +----+------+ | | | NUMA1_1 | | NUMA1_2 | | | | NUMA2_1 | | NUMA2_2 | | | | VCPU: 4 | | VCPU: 4 | | | | VCPU: 4 | | VCPU: 4 | | | +------------+ +-----------+ | | +-----------+ +-----------+ | +------------------------------+ +-----------------------------+ The request:: GET /allocation_candidates?resources=VCPU:1,DISK_GB:50&in_tree= will filter out candidates by ``CN1`` and return 2 combinations of allocation candidates. 1. ``NUMA1_1`` (``VCPU``) + ``CN1`` (``DISK_GB``) 2. ``NUMA1_2`` (``VCPU``) + ``CN1`` (``DISK_GB``) The specified tree can be a non-root provider. The request:: GET /allocation_candidates?resources=VCPU:1,DISK_GB:50&in_tree= will return the same result being aware of resource providers in the same tree with ``NUMA1_1`` resource provider. 1. ``NUMA1_1`` (``VCPU``) + ``CN1`` (``DISK_GB``) 2. ``NUMA1_2`` (``VCPU``) + ``CN1`` (``DISK_GB``) .. note:: We don't exclude ``NUMA1_2`` in the case above. That kind of feature is proposed separately and in progress. See the `Support subtree filter`_ specification for details. The suffixed syntax ``in_tree<$S>`` (where ``$S`` is a number in microversions ``1.25-1.32`` and ``[a-zA-Z0-9_-]{1,64}`` from ``1.33``) is also supported according to `Granular Resource Requests`_. This restricts providers satisfying the suffixed granular request group to the tree of the specified provider. For example, in the environment above, when you want to have ``VCPU`` from ``CN1`` and ``DISK_GB`` from wherever, the request may look like:: GET /allocation_candidates?resources=VCPU:1&in_tree= &resources1=DISK_GB:10 which will return the sharing providers as well as the local disk. 1. ``NUMA1_1`` (``VCPU``) + ``CN1`` (``DISK_GB``) 2. ``NUMA1_2`` (``VCPU``) + ``CN1`` (``DISK_GB``) 3. ``NUMA1_1`` (``VCPU``) + ``SS1`` (``DISK_GB``) 4. ``NUMA1_2`` (``VCPU``) + ``SS1`` (``DISK_GB``) 5. ``NUMA1_1`` (``VCPU``) + ``SS2`` (``DISK_GB``) 6. ``NUMA1_2`` (``VCPU``) + ``SS2`` (``DISK_GB``) This is because the unsuffixed ``in_tree`` is applied to only the unsuffixed resource of ``VCPU``, and not applied to the suffixed resource, ``DISK_GB``. 
When you want to have ``VCPU`` from wherever and ``DISK_GB`` from ``SS1``, the request may look like:: GET /allocation_candidates?resources=VCPU:1 &resources1=DISK_GB:10&in_tree1= which will stick to the first sharing provider for ``DISK_GB``. 1. ``NUMA1_1`` (``VCPU``) + ``SS1`` (``DISK_GB``) 2. ``NUMA1_2`` (``VCPU``) + ``SS1`` (``DISK_GB``) 3. ``NUMA2_1`` (``VCPU``) + ``SS1`` (``DISK_GB``) 4. ``NUMA2_2`` (``VCPU``) + ``SS1`` (``DISK_GB``) When you want to have ``VCPU`` from ``CN1`` and ``DISK_GB`` from ``SS1``, the request may look like:: GET /allocation_candidates?resources1=VCPU:1&in_tree1= &resources2=DISK_GB:10&in_tree2= &group_policy=isolate which will return only 2 candidates. 1. ``NUMA1_1`` (``VCPU``) + ``SS1`` (``DISK_GB``) 2. ``NUMA1_2`` (``VCPU``) + ``SS1`` (``DISK_GB``) .. _`filtering by root provider traits`: Filtering by Root Provider Traits ================================= When traits are associated with a particular resource, the provider tree should be constructed such that the traits are associated with the provider possessing the inventory of that resource. For example, trait ``HW_CPU_X86_AVX2`` is a trait associated with the ``VCPU`` resource, so it should be placed on the resource provider with ``VCPU`` inventory, wherever that provider is positioned in the tree structure. (A NUMA-aware host may model ``VCPU`` inventory in a child provider, whereas a non-NUMA-aware host may model it in the root provider.) On the other hand, some traits are associated not with a resource, but with the provider itself. For example, a compute host may be capable of ``COMPUTE_VOLUME_MULTI_ATTACH``, or be associated with a ``CUSTOM_WINDOWS_LICENSE_POOL``. In this case it is recommended that the root resource provider be used to represent the concept of the "compute host"; so these kinds of traits should always be placed on the root resource provider. The following environment illustrates the above concepts:: +---------------------------------+ +-------------------------------------------+ |+-------------------------------+| | +-------------------------------+ | || Compute Node (NON_NUMA_CN) || | | Compute Node (NUMA_CN) | | || VCPU: 8, || | | DISK_GB: 1000 | | || MEMORY_MB: 1024 || | | traits: | | || DISK_GB: 1000 || | | STORAGE_DISK_SSD, | | || traits: || | | COMPUTE_VOLUME_MULTI_ATTACH | | || HW_CPU_X86_AVX2, || | +-------+-------------+---------+ | || STORAGE_DISK_SSD, || | nested | | nested | || COMPUTE_VOLUME_MULTI_ATTACH, || |+-----------+-------+ +---+---------------+| || CUSTOM_WINDOWS_LICENSE_POOL || || NUMA1 | | NUMA2 || |+-------------------------------+| || VCPU: 4 | | VCPU: 4 || +---------------------------------+ || MEMORY_MB: 1024 | | MEMORY_MB: 1024 || || | | traits: || || | | HW_CPU_X86_AVX2 || |+-------------------+ +-------------------+| +-------------------------------------------+ A tree modeled in this fashion can take advantage of the `root_required`_ query parameter to return only allocation candidates from trees which possess (or do not possess) specific traits on their root provider. 
For example, to return allocation candidates including ``VCPU`` with the ``HW_CPU_X86_AVX2`` instruction set from hosts capable of ``COMPUTE_VOLUME_MULTI_ATTACH``, a request may look like:: GET /allocation_candidates ?resources1=VCPU:1,MEMORY_MB:512&required1=HW_CPU_X86_AVX2 &resources2=DISK_GB:100 &group_policy=none &root_required=COMPUTE_VOLUME_MULTI_ATTACH This will return results from both ``NUMA_CN`` and ``NON_NUMA_CN`` because both have the ``COMPUTE_VOLUME_MULTI_ATTACH`` trait on the root provider; but only ``NUMA2`` has ``HW_CPU_X86_AVX2`` so there will only be one result from ``NUMA_CN``. 1. ``NON_NUMA_CN`` (``VCPU``, ``MEMORY_MB``, ``DISK_GB``) 2. ``NUMA_CN`` (``DISK_GB``) + ``NUMA2`` (``VCPU``, ``MEMORY_MB``) To restrict allocation candidates to only those not in your ``CUSTOM_WINDOWS_LICENSE_POOL``, a request may look like:: GET /allocation_candidates ?resources1=VCPU:1,MEMORY_MB:512 &resources2=DISK_GB:100 &group_policy=none &root_required=!CUSTOM_WINDOWS_LICENSE_POOL This will return results only from ``NUMA_CN`` because ``NON_NUMA_CN`` has the forbidden ``CUSTOM_WINDOWS_LICENSE_POOL`` on the root provider. 1. ``NUMA_CN`` (``DISK_GB``) + ``NUMA1`` (``VCPU``, ``MEMORY_MB``) 2. ``NUMA_CN`` (``DISK_GB``) + ``NUMA2`` (``VCPU``, ``MEMORY_MB``) The syntax of the ``root_required`` query parameter is identical to that of ``required[$S]``: multiple trait strings may be specified, separated by commas, each optionally prefixed with ``!`` to indicate that it is forbidden. .. note:: ``root_required`` may not be suffixed, and may be specified only once, as it applies only to the root provider. .. note:: When sharing providers are involved in the request, ``root_required`` applies only to the root of the non-sharing provider tree. .. note:: While the ``required`` param supports the any-traits query with the ``in:`` prefix syntax since microversion 1.39 the ``root_required`` parameter does not support it yet. Filtering by Same Subtree ========================= If you want to express affinity among allocations in separate request groups, use the `same_subtree`_ query parameter. It accepts a comma-separated list of request group suffix strings ($S). Each must exactly match a suffix on a granular group somewhere else in the request. If this is provided, at least one of the resource providers satisfying a specified request group must be an ancestor of the rest. For example, given a model like:: +---------------------------+ | Compute Node (CN) | +-------------+-------------+ | +--------------------+-------------------+ | | +-----------+-----------+ +-----------+-----------+ | NUMA NODE (NUMA0) | | NUMA NODE (NUMA1) | | VCPU: 4 | | VCPU: 4 | | MEMORY_MB: 2048 | | MEMORY_MB: 2048 | | traits: | | traits: | | HW_NUMA_ROOT | | HW_NUMA_ROOT | +-----------+-----------+ +----+-------------+----+ | | | +-----------+-----------+ +----------------+-----+ +-----+----------------+ | FPGA (FPGA0_0) | | FPGA (FPGA1_0) | | FPGA (FPGA1_1) | | ACCELERATOR_FPGA:1 | | ACCELERATOR_FPGA:1 | | ACCELERATOR_FPGA:1 | | traits: | | traits: | | traits: | | CUSTOM_TYPE1 | | CUSTOM_TYPE1 | | CUSTOM_TYPE2 | +-----------------------+ +----------------------+ +----------------------+ To request FPGAs on the same NUMA node with VCPUs and MEMORY, a request may look like:: GET /allocation_candidates ?resources_COMPUTE=VCPU:1,MEMORY_MB:256 &resources_ACCEL=ACCELERATOR_FPGA:1 &group_policy=none &same_subtree=_COMPUTE,_ACCEL This will produce candidates including: 1. 
``NUMA0`` (``VCPU``, ``MEMORY_MB``) + ``FPGA0_0`` (``ACCELERATOR_FPGA``) 2. ``NUMA1`` (``VCPU``, ``MEMORY_MB``) + ``FPGA1_0`` (``ACCELERATOR_FPGA``) 3. ``NUMA1`` (``VCPU``, ``MEMORY_MB``) + ``FPGA1_1`` (``ACCELERATOR_FPGA``) but not: 4. ``NUMA0`` (``VCPU``, ``MEMORY_MB``) + ``FPGA1_0`` (``ACCELERATOR_FPGA``) 5. ``NUMA0`` (``VCPU``, ``MEMORY_MB``) + ``FPGA1_1`` (``ACCELERATOR_FPGA``) 6. ``NUMA1`` (``VCPU``, ``MEMORY_MB``) + ``FPGA0_0`` (``ACCELERATOR_FPGA``) The request groups specified in the ``same_subtree`` need not have a resources$S. For example, to request 2 FPGAs with different traits on the same NUMA node, a request may look like:: GET /allocation_candidates ?required_NUMA=HW_NUMA_ROOT &resources_ACCEL1=ACCELERATOR_FPGA:1 &required_ACCEL1=CUSTOM_TYPE1 &resources_ACCEL2=ACCELERATOR_FPGA:1 &required_ACCEL2=CUSTOM_TYPE2 &group_policy=none &same_subtree=_NUMA,_ACCEL1,_ACCEL2 This will produce candidates including: 1. ``FPGA1_0`` (``ACCELERATOR_FPGA``) + ``FPGA1_1`` (``ACCELERATOR_FPGA``) + ``NUMA1`` but not: 2. ``FPGA0_0`` (``ACCELERATOR_FPGA``) + ``FPGA1_1`` (``ACCELERATOR_FPGA``) + ``NUMA0`` 3. ``FPGA0_0`` (``ACCELERATOR_FPGA``) + ``FPGA1_1`` (``ACCELERATOR_FPGA``) + ``NUMA1`` 4. ``FPGA1_0`` (``ACCELERATOR_FPGA``) + ``FPGA1_1`` (``ACCELERATOR_FPGA``) + ``NUMA0`` The resource provider that satisfies the resourceless request group ``?required_NUMA=HW_NUMA_ROOT``, ``NUMA1`` in the first example above, will not be in the ``allocation_request`` field of the response, but is shown in the ``mappings`` field. The ``same_subtree`` query parameter can be repeated and each repeat group is treated independently. .. _`Nested Resource Providers`: https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/nested-resource-providers.html .. _`POST /resource_providers`: https://docs.openstack.org/api-ref/placement/#create-resource-provider .. _`PUT /resource_providers/{uuid}`: https://docs.openstack.org/api-ref/placement/#update-resource-provider .. _`GET /allocation_candidates`: https://docs.openstack.org/api-ref/placement/#list-allocation-candidates .. _`Filtering by Aggregate Membership`: https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/alloc-candidates-member-of.html .. _`The Traits API`: http://specs.openstack.org/openstack/nova-specs/specs/pike/implemented/resource-provider-traits.html .. _`Request Traits`: https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/request-traits-in-nova.html .. _`Forbidden Traits`: https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/placement-forbidden-traits.html .. _`Granular Resource Request`: https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/granular-resource-requests.html .. _`Filter Allocation Candidates by Provider Tree`: https://specs.openstack.org/openstack/nova-specs/specs/stein/implemented/alloc-candidates-in-tree.html .. _`Support subtree filter`: https://review.opendev.org/#/c/595236/ .. _`root_required`: https://docs.openstack.org/placement/latest/specs/train/approved/2005575-nested-magic-1.html#root-required .. 
_`same_subtree`: https://docs.openstack.org/placement/latest/specs/train/approved/2005575-nested-magic-1.html#same-subtree ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2407777 openstack_placement-13.0.0/doc/test/0000775000175000017500000000000000000000000017361 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/doc/test/redirect-tests.txt0000664000175000017500000000131700000000000023065 0ustar00zuulzuul00000000000000/placement/latest/specs/train/approved/2005297-negative-aggregate-membership.html 301 /placement/latest/specs/train/implemented/2005297-negative-aggregate-membership.html /placement/latest/specs/train/approved/placement-resource-provider-request-group-mapping-in-allocation-candidates.html 301 /placement/latest/specs/train/implemented/placement-resource-provider-request-group-mapping-in-allocation-candidates.html /placement/latest/specs/train/approved/2005575-nested-magic-1.html 301 /placement/latest/specs/train/implemented/2005575-nested-magic-1.html /placement/latest/usage/index.html 301 /placement/latest/user/index.html /placement/latest/usage/provider-tree.html 301 /placement/latest/user/provider-tree.html ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2127779 openstack_placement-13.0.0/etc/0000775000175000017500000000000000000000000016410 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2407777 openstack_placement-13.0.0/etc/placement/0000775000175000017500000000000000000000000020360 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/etc/placement/README.rst0000664000175000017500000000143000000000000022045 0ustar00zuulzuul00000000000000Sample policy and config files ============================== This directory contains sample ``placement.conf`` and ``policy.yaml`` files. 
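For a quick illustration only (this is a hand-written sketch, not generated from the code, and every value shown is a placeholder rather than a recommended default), a minimal ``placement.conf`` might look like::

    [DEFAULT]
    debug = False

    [placement_database]
    # SQLAlchemy connection URL for the placement database.
    connection = mysql+pymysql://placement:secret@controller/placement

    [api]
    auth_strategy = keystone

The generated sample files described below remain the authoritative reference for the full set of options and their defaults.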
Sample Config ------------- To generate the sample ``placement.conf`` file, run the following command from the top level of the placement directory:: tox -e genconfig For a pre-generated example of the latest ``placement.conf``, see: https://docs.openstack.org/placement/latest/configuration/sample-config.html Sample Policy ------------- To generate the sample ``policy.yaml`` file, run the following command from the top level of the placement directory:: tox -e genpolicy For a pre-generated example of the latest placement ``policy.yaml``, see: https://docs.openstack.org/placement/latest/configuration/sample-policy.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/etc/placement/config-generator.conf0000664000175000017500000000067500000000000024470 0ustar00zuulzuul00000000000000[DEFAULT] output_file = etc/placement/placement.conf.sample wrap_width = 80 namespace = placement.conf namespace = keystonemiddleware.auth_token namespace = oslo.log namespace = oslo.middleware.cors namespace = oslo.middleware.http_proxy_to_wsgi namespace = oslo.policy namespace = osprofiler # FIXME(mriedem): There are likely other missing 3rd party oslo library # options that should show up in the placement.conf docs, like oslo.concurrency ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/etc/placement/policy-generator.conf0000664000175000017500000000011700000000000024511 0ustar00zuulzuul00000000000000[DEFAULT] output_file = etc/placement/policy.yaml.sample namespace = placement ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2407777 openstack_placement-13.0.0/gate/0000775000175000017500000000000000000000000016555 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/gate/README0000664000175000017500000000125000000000000017433 0ustar00zuulzuul00000000000000This directory contains files used by the OpenStack infra test system. They are really only relevant within the scope of the OpenStack infra system and are not expected to be useful to anyone else. These files are a mixture of: * Hooks and other scripts to be used by the OpenStack infra test system. These scripts may be called by certain jobs at important times to do extra testing, setup, run services, etc. * "gabbits" are test files to be used with some of the jobs described in .zuul.yaml and playbooks. When changes are made in the gabbits or playbooks it is quite likely that queries in the playbooks or the assertions in the gabbits will need to be updated. ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2407777 openstack_placement-13.0.0/gate/gabbits/0000775000175000017500000000000000000000000020170 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/gate/gabbits/nested-perfload.yaml0000664000175000017500000001361100000000000024132 0ustar00zuulzuul00000000000000# This is a nested topology to exercise a large section of the nested provider # related code in placement. The structure here is based on some of the # structures in the NUMANetworkFixture in # placement.tests.functional.fixtures.gabbits. 
This version initially leaves # out many of the resource providers created there, with the intent that we can # add more as the need presents itself. # # For the time being only one compute node is created, with two numa nodes, # each with two devices attached, either two FPGA or an FPGA and PGPU. # # Here's a graphical representation of what is created. Please keep this up to # date as changes are made: # # +-----------------------------+ # | compute node (cn1) | # | COMPUTE_VOLUME_MULTI_ATTACH | # | DISK_GB: 20480 | # +---------------+-------------+ # | # +--------------------+ # | | # +---------+--------+ +---------+--------+ # | numa0 | | numa1 | # | HW_NUMA_ROOT | | HW_NUMA_ROOT | # | | | CUSTOM_FOO | # | VCPU: 4 (2 res.) | | VCPU: 4 | # | MEMORY_MB: 2048 | | MEMORY_MB: 2048 | # | min_unit: 512 | | min_unit: 256 | # | step_size: 256 | | max_unit: 1024 | # +---+----------+---+ +---+----------+---+ # | | | | # +---+---+ +---+---+ +---+---+ +---+---+ # |fpga0 | |pgpu0 | |fpga1_0| |fpga1_1| # |FPGA:1 | |VGPU:8 | |FPGA:1 | |FPGA:1 | # +-------+ +-------+ +-------+ +-------+ defaults: request_headers: accept: application/json content-type: application/json openstack-api-version: placement latest x-auth-token: $ENVIRON['TOKEN'] tests: - name: create FOO trait PUT: /traits/CUSTOM_FOO status: 201 || 204 - name: create cn1 POST: /resource_providers data: uuid: $ENVIRON['CN1_UUID'] name: $ENVIRON['CN1_UUID'] status: 200 - name: set cn1 inventory PUT: /resource_providers/$ENVIRON['CN1_UUID']/inventories data: resource_provider_generation: 0 inventories: DISK_GB: total: 20480 - name: set compute node traits PUT: /resource_providers/$ENVIRON['CN1_UUID']/traits data: resource_provider_generation: 1 traits: - COMPUTE_VOLUME_MULTI_ATTACH - name: create numa 0 POST: /resource_providers data: uuid: $ENVIRON['N0_UUID'] name: numa 0-$ENVIRON['N0_UUID'] parent_provider_uuid: $ENVIRON['CN1_UUID'] - name: set numa 0 inventory PUT: /resource_providers/$ENVIRON['N0_UUID']/inventories data: resource_provider_generation: 0 inventories: VCPU: total: 4 reserved: 2 MEMORY_MB: total: 2048 min_unit: 512 step_size: 256 - name: set numa 0 traits PUT: /resource_providers/$ENVIRON['N0_UUID']/traits data: resource_provider_generation: 1 traits: - HW_NUMA_ROOT - name: create fpga0_0 POST: /resource_providers data: uuid: $ENVIRON['FPGA0_0_UUID'] name: fpga0-0-$ENVIRON['FPGA0_0_UUID'] parent_provider_uuid: $ENVIRON['N0_UUID'] - name: set fpga0_0 inventory PUT: /resource_providers/$ENVIRON['FPGA0_0_UUID']/inventories data: resource_provider_generation: 0 inventories: FPGA: total: 1 - name: create pgpu0_0 POST: /resource_providers data: uuid: $ENVIRON['PGPU0_0_UUID'] name: pgpu0-0-$ENVIRON['PGPU0_0_UUID'] parent_provider_uuid: $ENVIRON['N0_UUID'] - name: set pgpu0_0 inventory PUT: /resource_providers/$ENVIRON['PGPU0_0_UUID']/inventories data: resource_provider_generation: 0 inventories: VGPU: total: 8 - name: create numa 1 POST: /resource_providers data: uuid: $ENVIRON['N1_UUID'] name: numa 1-$ENVIRON['N1_UUID'] parent_provider_uuid: $ENVIRON['CN1_UUID'] - name: set numa 1 inventory PUT: /resource_providers/$ENVIRON['N1_UUID']/inventories data: resource_provider_generation: 0 inventories: VCPU: total: 4 MEMORY_MB: total: 2048 min_unit: 256 max_unit: 1024 - name: set numa 1 traits PUT: /resource_providers/$ENVIRON['N1_UUID']/traits data: resource_provider_generation: 1 traits: - HW_NUMA_ROOT - CUSTOM_FOO - name: create fpga1_0 POST: /resource_providers data: uuid: $ENVIRON['FPGA1_0_UUID'] name: fpga1-0-$ENVIRON['FPGA1_0_UUID'] 
parent_provider_uuid: $ENVIRON['N1_UUID'] - name: set fpga1_0 inventory PUT: /resource_providers/$ENVIRON['FPGA1_0_UUID']/inventories data: resource_provider_generation: 0 inventories: FPGA: total: 1 - name: create fpga1_1 POST: /resource_providers data: uuid: $ENVIRON['FPGA1_1_UUID'] name: fpga1-1-$ENVIRON['FPGA1_1_UUID'] parent_provider_uuid: $ENVIRON['N1_UUID'] - name: set fpga1_1 inventory PUT: /resource_providers/$ENVIRON['FPGA1_1_UUID']/inventories data: resource_provider_generation: 0 inventories: FPGA: total: 1 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/gate/perfload-nested-loader.sh0000775000175000017500000000132300000000000023433 0ustar00zuulzuul00000000000000#!/bin/bash set -a HOST=$1 GABBIT=$2 # By default the placement server is set up with noauth2 authentication # handling. If that is changed to keystone, a $TOKEN can be generated in # the calling environment and used instead of the default 'admin'. TOKEN=${TOKEN:-admin} # These are the dynamic/unique values for individual resource providers # that need to be set for each run a gabbi file. Values that are the same # for all the resource providers (for example, traits and inventory) should # be set in $GABBIT. CN1_UUID=$(uuidgen) N0_UUID=$(uuidgen) N1_UUID=$(uuidgen) FPGA0_0_UUID=$(uuidgen) FPGA1_0_UUID=$(uuidgen) FPGA1_1_UUID=$(uuidgen) PGPU0_0_UUID=$(uuidgen) # Run gabbi silently. gabbi-run -q $HOST -- $GABBIT ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/gate/perfload-nested-runner.sh0000775000175000017500000001101100000000000023471 0ustar00zuulzuul00000000000000#!/bin/bash -x WORK_DIR=$1 PLACEMENT_URL="http://127.0.0.1:8000" LOG=placement-perf.txt LOG_DEST=${WORK_DIR}/logs # The gabbit used to create one nested provider tree. It takes # inputs from LOADER to create a unique tree. GABBIT=gate/gabbits/nested-perfload.yaml LOADER=gate/perfload-nested-loader.sh # The query to be used to get a list of allocation candidates. If # $GABBIT is changed, this may need to change. TRAIT="COMPUTE_VOLUME_MULTI_ATTACH" TRAIT1="CUSTOM_FOO" PLACEMENT_QUERY="resources=DISK_GB:10&required=${TRAIT}&resources_COMPUTE=VCPU:1,MEMORY_MB:256&required_COMPUTE=${TRAIT1}&resources_FPGA=FPGA:1&group_policy=none&same_subtree=_COMPUTE,_FPGA" # Number of nested trees to create. ITERATIONS=1000 # Number of times to write allocations and then time again. ALLOCATIONS_TO_WRITE=10 # Apache Benchmark Concurrency AB_CONCURRENT=10 # Apache Benchmark Total Requests AB_COUNT=500 # The number of providers in each nested tree. This will need to # change whenever the resource provider topology created in $GABBIT # is changed. PROVIDER_TOPOLOGY_COUNT=7 # Expected total number of providers, used to check that creation # was a success. 
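# With the defaults above that works out to 1000 * 7 = 7000 providers; if
# ITERATIONS or PROVIDER_TOPOLOGY_COUNT is changed, this product changes with it.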
TOTAL_PROVIDER_COUNT=$((ITERATIONS * PROVIDER_TOPOLOGY_COUNT)) trap "sudo cp -p $LOG $LOG_DEST" EXIT function time_candidates { ( echo "##### TIMING GET /allocation_candidates?${PLACEMENT_QUERY} twice" time curl -s -H 'x-auth-token: admin' -H 'openstack-api-version: placement latest' "${PLACEMENT_URL}/allocation_candidates?${PLACEMENT_QUERY}" > /dev/null time curl -s -H 'x-auth-token: admin' -H 'openstack-api-version: placement latest' "${PLACEMENT_URL}/allocation_candidates?${PLACEMENT_QUERY}" > /dev/null ) 2>&1 | tee -a $LOG } function ab_bench { ( echo "#### Running apache benchmark" ab -c $AB_CONCURRENT -n $AB_COUNT -H 'x-auth-token: admin' -H 'openstack-api-version: placement latest' "${PLACEMENT_URL}/allocation_candidates?${PLACEMENT_QUERY}" ) 2>&1 | tee -a $LOG } function write_allocation { # Take the first allocation request and send it back as a well-formed allocation curl -s -H 'x-auth-token: admin' -H 'openstack-api-version: placement latest' "${PLACEMENT_URL}/allocation_candidates?${PLACEMENT_QUERY}&limit=5" \ | jq --arg proj $(uuidgen) --arg user $(uuidgen) '.allocation_requests[0] + {consumer_generation: null, project_id: $proj, user_id: $user, consumer_type: "TEST"}' \ | curl -f -s -S -H 'x-auth-token: admin' -H 'content-type: application/json' -H 'openstack-api-version: placement latest' \ -X PUT -d @- "${PLACEMENT_URL}/allocations/$(uuidgen)" # curl -f will fail silently on server errors and return code 22 # When used with -s, --silent, -S makes curl show an error message if it fails # If we failed to write an allocation, skip measurements and log a message rc=$? if [[ $rc -eq 22 ]]; then echo "Failed to write allocation due to a server error. See logs/placement-api.log for additional detail." exit 1 elif [[ $rc -ne 0 ]]; then echo "Failed to write allocation, curl returned code: $rc. See job-output.txt for additional detail." exit 1 fi } function load_candidates { time_candidates for iter in $(seq 1 $ALLOCATIONS_TO_WRITE); do echo "##### Writing allocation ${iter}" | tee -a $LOG write_allocation time_candidates done } function check_placement { local rp_count local code code=0 python3 -m venv .perfload . .perfload/bin/activate # install gabbi pip install gabbi # Create $TOTAL_PROVIDER_COUNT nested resource provider trees, # each tree having $PROVIDER_TOPOLOGY_COUNT resource providers. # LOADER is called $ITERATIONS times in parallel using 50% of # the number of processors on the host. echo "##### Creating $TOTAL_PROVIDER_COUNT providers" | tee -a $LOG seq 1 $ITERATIONS | parallel -P 50% $LOADER $PLACEMENT_URL $GABBIT set +x rp_count=$(curl -H 'x-auth-token: admin' ${PLACEMENT_URL}/resource_providers |json_pp|grep -c '"name"') # If we failed to create the required number of rps, skip measurements and # log a message. if [[ $rp_count -ge $TOTAL_PROVIDER_COUNT ]]; then load_candidates ab_bench else ( echo "Unable to create expected number of resource providers. Expected: ${COUNT}, Got: $rp_count" echo "See job-output.txt.gz and logs/placement-api.log for additional detail." ) | tee -a $LOG code=1 fi set -x deactivate exit $code } check_placement ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/gate/perfload-runner.sh0000775000175000017500000001205700000000000022224 0ustar00zuulzuul00000000000000#!/bin/bash -x WORK_DIR=$1 # Do some performance related information gathering for placement. EXPLANATION=" This output combines output from placeload with timing information gathered via curl. 
The placeload output is the current maximum microversion of placement followed by an encoded representation of what it has done. Lowercase 'r', 'i', 'a', and 't' indicate successful creation of a resource provider and setting inventory, aggregates, and traits on that resource provider. If there are upper case versions of any of those letters, a failure happened for a single request. The letter will be followed by the HTTP status code and the resource provider uuid. These can be used to find the relevant entry in logs/placement-api.log. Note that placeload does not exit with an error code when this happens. It merely reports and moves on. Under correct circumstances the right output is a long string of 4000 characters containing 'r', 'i', 'a', 't' in random order (because async). After that are three aggregate uuids, timing information for the placeload run, and then timing information for two identical curl requests for allocation candidates. If no timed requests are present it means that the expected number of resource providers were not created. At this time, only resource providers are counted, not whether they have the correct inventory, aggregates, or traits. " # This aggregate uuid is a static value in placeload. AGGREGATE="14a5c8a3-5a99-4e8f-88be-00d85fcb1c17" TRAIT="HW_CPU_X86_AVX2" PLACEMENT_QUERY="resources=VCPU:1,DISK_GB:10,MEMORY_MB:256&member_of=${AGGREGATE}&required=${TRAIT}" PLACEMENT_URL="http://127.0.0.1:8000" LOG=placement-perf.txt LOG_DEST=${WORK_DIR}/logs COUNT=1000 # Apache Benchmark Concurrency AB_CONCURRENT=10 # Apache Benchmark Total Requests AB_COUNT=500 trap "sudo cp -p $LOG $LOG_DEST" EXIT function time_candidates { ( echo "##### TIMING GET /allocation_candidates?${PLACEMENT_QUERY} twice" time curl -s -H 'x-auth-token: admin' -H 'openstack-api-version: placement latest' "${PLACEMENT_URL}/allocation_candidates?${PLACEMENT_QUERY}" > /dev/null time curl -s -H 'x-auth-token: admin' -H 'openstack-api-version: placement latest' "${PLACEMENT_URL}/allocation_candidates?${PLACEMENT_QUERY}" > /dev/null ) 2>&1 | tee -a $LOG } function ab_bench { ( echo "#### Running apache benchmark" ab -c $AB_CONCURRENT -n $AB_COUNT -H 'x-auth-token: admin' -H 'openstack-api-version: placement latest' "${PLACEMENT_URL}/allocation_candidates?${PLACEMENT_QUERY}" ) 2>&1 | tee -a $LOG } function write_allocation { # Take the first allocation request and send it back as a well-formed allocation curl -s -H 'x-auth-token: admin' -H 'openstack-api-version: placement latest' "${PLACEMENT_URL}/allocation_candidates?${PLACEMENT_QUERY}&limit=5" \ | jq --arg proj $(uuidgen) --arg user $(uuidgen) '.allocation_requests[0] + {consumer_generation: null, project_id: $proj, user_id: $user, consumer_type: "TEST"}' \ | curl -f -s -S -H 'x-auth-token: admin' -H 'content-type: application/json' -H 'openstack-api-version: placement latest' \ -X PUT -d @- "${PLACEMENT_URL}/allocations/$(uuidgen)" rc=$? # curl -f will fail silently on server errors and return code 22 # When used with -s, --silent, -S makes curl show an error message if it fails # If we failed to write an allocation, skip measurements and log a message if [[ $rc -eq 22 ]]; then echo "Failed to write allocation due to a server error. See logs/placement-api.log for additional detail." exit 1 elif [[ $rc -ne 0 ]]; then echo "Failed to write allocation, curl returned code: $rc. See job-output.txt for additional detail." 
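        # A non-22, non-zero code usually means curl itself failed (for
        # example the connection was refused) rather than the server
        # rejecting the allocation.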
exit 1 fi } function load_candidates { time_candidates for iter in {1..99}; do echo "##### Writing allocation ${iter}" | tee -a $LOG write_allocation time_candidates done } function check_placement { local rp_count local code code=0 python3 -m venv .placeload . .placeload/bin/activate # install placeload pip install 'placeload==0.3.0' set +x # load with placeload ( echo "$EXPLANATION" # preheat the aggregates to avoid https://bugs.launchpad.net/nova/+bug/1804453 placeload $PLACEMENT_URL 10 echo "##### TIMING placeload creating $COUNT resource providers with inventory, aggregates and traits." time placeload $PLACEMENT_URL $COUNT ) 2>&1 | tee -a $LOG rp_count=$(curl -H 'x-auth-token: admin' ${PLACEMENT_URL}/resource_providers |json_pp|grep -c '"name"') # If we failed to create the required number of rps, skip measurements and # log a message. if [[ $rp_count -ge $COUNT ]]; then load_candidates ab_bench else ( echo "Unable to create expected number of resource providers. Expected: ${COUNT}, Got: $rp_count" echo "See job-output.txt.gz and logs/placement-api.log for additional detail." ) | tee -a $LOG code=1 fi set -x deactivate exit $code } check_placement ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/gate/perfload-server.sh0000775000175000017500000000251000000000000022212 0ustar00zuulzuul00000000000000#!/bin/bash -x WORK_DIR=$1 # create database sudo debconf-set-selections <=3.9 License-File: LICENSE Requires-Dist: pbr>=3.1.1 Requires-Dist: SQLAlchemy>=1.4.0 Requires-Dist: keystonemiddleware>=4.18.0 Requires-Dist: Routes>=2.3.1 Requires-Dist: WebOb>=1.8.2 Requires-Dist: jsonschema>=3.2.0 Requires-Dist: requests>=2.25.0 Requires-Dist: oslo.concurrency>=3.26.0 Requires-Dist: oslo.config>=6.7.0 Requires-Dist: oslo.context>=2.22.0 Requires-Dist: oslo.log>=4.3.0 Requires-Dist: oslo.serialization>=2.25.0 Requires-Dist: oslo.utils>=4.5.0 Requires-Dist: oslo.db>=8.6.0 Requires-Dist: oslo.policy>=4.4.0 Requires-Dist: oslo.middleware>=3.31.0 Requires-Dist: oslo.upgradecheck>=1.3.0 Requires-Dist: os-resource-classes>=1.1.0 Requires-Dist: os-traits>=3.3.0 Requires-Dist: microversion-parse>=0.2.1 If you are viewing this README on GitHub, please be aware that placement development happens on `OpenStack git `_ and `OpenStack gerrit `_. =================== OpenStack Placement =================== .. image:: https://governance.openstack.org/tc/badges/placement.svg :target: https://governance.openstack.org/tc/reference/tags/index.html OpenStack Placement provides an HTTP service for managing, selecting, and claiming providers of classes of inventory representing available resources in a cloud. API --- To learn how to use Placement's API, consult the documentation available online at: - `Placement API Reference `__ For more information on OpenStack APIs, SDKs and CLIs in general, refer to: - `OpenStack for App Developers `__ - `Development resources for OpenStack clouds `__ Operators --------- To learn how to deploy and configure OpenStack Placement, consult the documentation available online at: - `OpenStack Placement `__ In the unfortunate event that bugs are discovered, they should be reported to the appropriate bug tracker. If you obtained the software from a 3rd party operating system vendor, it is often wise to use their own bug tracker for reporting problems. 
In all other cases use the master OpenStack bug tracker, available at: - `Bug Tracker `__ - `File new Bug `__ Developers ---------- For information on how to contribute to Placement, please see the contents of CONTRIBUTING.rst. Further developer focused documentation is available at: - `Official Placement Documentation `__ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591511.0 openstack_placement-13.0.0/openstack_placement.egg-info/SOURCES.txt0000664000175000017500000004762400000000000025247 0ustar00zuulzuul00000000000000.coveragerc .pre-commit-config.yaml .stestr.conf .zuul.yaml AUTHORS CONTRIBUTING.rst ChangeLog LICENSE README.rst bindep.txt requirements.txt setup.cfg setup.py test-requirements.txt tox.ini api-ref/ext/__init__.py api-ref/ext/validator.py api-ref/source/aggregates.inc api-ref/source/allocation_candidates.inc api-ref/source/allocations.inc api-ref/source/conf.py api-ref/source/errors.inc api-ref/source/generations.inc api-ref/source/index.rst api-ref/source/inventories.inc api-ref/source/inventory.inc api-ref/source/parameters.yaml api-ref/source/request-ids.inc api-ref/source/reshaper.inc api-ref/source/resource_class.inc api-ref/source/resource_classes.inc api-ref/source/resource_provider.inc api-ref/source/resource_provider_allocations.inc api-ref/source/resource_provider_traits.inc api-ref/source/resource_provider_usages.inc api-ref/source/resource_providers.inc api-ref/source/root.inc api-ref/source/traits.inc api-ref/source/usages.inc api-ref/source/samples/aggregates/get-aggregates-1.19.json api-ref/source/samples/aggregates/get-aggregates.json api-ref/source/samples/aggregates/update-aggregates-1.19.json api-ref/source/samples/aggregates/update-aggregates-request-1.19.json api-ref/source/samples/aggregates/update-aggregates-request.json api-ref/source/samples/aggregates/update-aggregates.json api-ref/source/samples/allocation_candidates/get-allocation_candidates-1.12.json api-ref/source/samples/allocation_candidates/get-allocation_candidates-1.17.json api-ref/source/samples/allocation_candidates/get-allocation_candidates-1.29.json api-ref/source/samples/allocation_candidates/get-allocation_candidates-1.34.json api-ref/source/samples/allocation_candidates/get-allocation_candidates.json api-ref/source/samples/allocations/get-allocations-1.28.json api-ref/source/samples/allocations/get-allocations-1.38.json api-ref/source/samples/allocations/get-allocations.json api-ref/source/samples/allocations/manage-allocations-request-1.28.json api-ref/source/samples/allocations/manage-allocations-request-1.38.json api-ref/source/samples/allocations/manage-allocations-request.json api-ref/source/samples/allocations/update-allocations-request-1.12.json api-ref/source/samples/allocations/update-allocations-request-1.28.json api-ref/source/samples/allocations/update-allocations-request-1.38.json api-ref/source/samples/allocations/update-allocations-request.json api-ref/source/samples/inventories/get-inventories.json api-ref/source/samples/inventories/get-inventory.json api-ref/source/samples/inventories/update-inventories-request.json api-ref/source/samples/inventories/update-inventories.json api-ref/source/samples/inventories/update-inventory-request.json api-ref/source/samples/inventories/update-inventory.json api-ref/source/samples/reshaper/post-reshaper-1.30.json api-ref/source/samples/reshaper/post-reshaper-1.38.json api-ref/source/samples/resource_classes/create-resource_classes-request.json 
api-ref/source/samples/resource_classes/get-resource_class.json api-ref/source/samples/resource_classes/get-resource_classes.json api-ref/source/samples/resource_classes/update-resource_class-request.json api-ref/source/samples/resource_classes/update-resource_class.json api-ref/source/samples/resource_provider_allocations/get-resource_provider_allocations.json api-ref/source/samples/resource_provider_traits/get-resource_provider-traits.json api-ref/source/samples/resource_provider_traits/update-resource_provider-traits-request.json api-ref/source/samples/resource_provider_traits/update-resource_provider-traits.json api-ref/source/samples/resource_provider_usages/get-resource_provider_usages.json api-ref/source/samples/resource_providers/create-resource_provider.json api-ref/source/samples/resource_providers/create-resource_providers-request.json api-ref/source/samples/resource_providers/get-resource_provider.json api-ref/source/samples/resource_providers/get-resource_providers.json api-ref/source/samples/resource_providers/update-resource_provider-request.json api-ref/source/samples/resource_providers/update-resource_provider.json api-ref/source/samples/root/get-root.json api-ref/source/samples/traits/get-traits.json api-ref/source/samples/usages/get-usages-1.38.json api-ref/source/samples/usages/get-usages.json doc/README.rst doc/requirements.txt doc/source/conf.py doc/source/index.rst doc/source/placement-api-microversion-history.rst doc/source/_extra/.htaccess doc/source/_static/.placeholder doc/source/admin/index.rst doc/source/admin/upgrade-notes.rst doc/source/cli/index.rst doc/source/cli/placement-manage.rst doc/source/cli/placement-status.rst doc/source/configuration/config.rst doc/source/configuration/index.rst doc/source/configuration/policy.rst doc/source/configuration/sample-config.rst doc/source/configuration/sample-policy.rst doc/source/contributor/api-ref-guideline.rst doc/source/contributor/architecture.rst doc/source/contributor/contributing.rst doc/source/contributor/goals.rst doc/source/contributor/index.rst doc/source/contributor/quick-dev.rst doc/source/contributor/testing.rst doc/source/contributor/vision-reflection.rst doc/source/install/from-pypi.rst doc/source/install/index.rst doc/source/install/install-obs.rst doc/source/install/install-rdo.rst doc/source/install/install-ubuntu.rst doc/source/install/note_configuration_vary_by_distribution.rst doc/source/install/verify.rst doc/source/install/shared/endpoints.rst doc/source/specs/index.rst doc/source/specs/template.rst doc/source/specs/2023.1/approved/policy-defaults-improvement.rst doc/source/specs/train/approved/2005473-support-consumer-types.rst doc/source/specs/train/implemented/2005297-negative-aggregate-membership.rst doc/source/specs/train/implemented/2005575-nested-magic-1.rst doc/source/specs/train/implemented/placement-resource-provider-request-group-mapping-in-allocation-candidates.rst doc/source/specs/xena/implemented/allow-provider-re-parenting.rst doc/source/specs/xena/implemented/support-consumer-types.rst doc/source/specs/yoga/implemented/2005345-placement-mixing-required-traits-with-any-traits.rst doc/source/specs/yoga/implemented/2005346-any-traits-in-allocation_candidates-query.rst doc/source/specs/zed/approved/template.rst doc/source/user/index.rst doc/source/user/provider-tree.rst doc/test/redirect-tests.txt etc/placement/README.rst etc/placement/config-generator.conf etc/placement/policy-generator.conf gate/README gate/perfload-nested-loader.sh gate/perfload-nested-runner.sh 
gate/perfload-runner.sh gate/perfload-server.sh gate/gabbits/nested-perfload.yaml openstack_placement.egg-info/PKG-INFO openstack_placement.egg-info/SOURCES.txt openstack_placement.egg-info/dependency_links.txt openstack_placement.egg-info/entry_points.txt openstack_placement.egg-info/not-zip-safe openstack_placement.egg-info/pbr.json openstack_placement.egg-info/requires.txt openstack_placement.egg-info/top_level.txt placement/__init__.py placement/attribute_cache.py placement/auth.py placement/context.py placement/db_api.py placement/deploy.py placement/direct.py placement/errors.py placement/exception.py placement/fault_wrap.py placement/handler.py placement/lib.py placement/microversion.py placement/policy.py placement/requestlog.py placement/rest_api_version_history.rst placement/util.py placement/wsgi_wrapper.py placement/cmd/__init__.py placement/cmd/manage.py placement/cmd/status.py placement/conf/__init__.py placement/conf/api.py placement/conf/base.py placement/conf/database.py placement/conf/opts.py placement/conf/paths.py placement/conf/placement.py placement/db/__init__.py placement/db/constants.py placement/db/sqlalchemy/__init__.py placement/db/sqlalchemy/alembic.ini placement/db/sqlalchemy/migration.py placement/db/sqlalchemy/models.py placement/db/sqlalchemy/alembic/env.py placement/db/sqlalchemy/alembic/script.py.mako placement/db/sqlalchemy/alembic/versions/422ece571366_add_consumer_types_table.py placement/db/sqlalchemy/alembic/versions/611cd6dffd7b_block_on_null_root_provider_id.py placement/db/sqlalchemy/alembic/versions/a082b8bb98d0_drop_redundant_indexes_for_unique_.py placement/db/sqlalchemy/alembic/versions/b4ed3a175331_initial.py placement/db/sqlalchemy/alembic/versions/b5c396305c25_block_on_null_consumer.py placement/handlers/__init__.py placement/handlers/aggregate.py placement/handlers/allocation.py placement/handlers/allocation_candidate.py placement/handlers/inventory.py placement/handlers/reshaper.py placement/handlers/resource_class.py placement/handlers/resource_provider.py placement/handlers/root.py placement/handlers/trait.py placement/handlers/usage.py placement/handlers/util.py placement/objects/__init__.py placement/objects/allocation.py placement/objects/allocation_candidate.py placement/objects/consumer.py placement/objects/consumer_type.py placement/objects/inventory.py placement/objects/project.py placement/objects/research_context.py placement/objects/reshaper.py placement/objects/resource_class.py placement/objects/resource_provider.py placement/objects/rp_candidates.py placement/objects/trait.py placement/objects/usage.py placement/objects/user.py placement/policies/__init__.py placement/policies/aggregate.py placement/policies/allocation.py placement/policies/allocation_candidate.py placement/policies/base.py placement/policies/inventory.py placement/policies/reshaper.py placement/policies/resource_class.py placement/policies/resource_provider.py placement/policies/trait.py placement/policies/usage.py placement/schemas/__init__.py placement/schemas/aggregate.py placement/schemas/allocation.py placement/schemas/allocation_candidate.py placement/schemas/common.py placement/schemas/inventory.py placement/schemas/reshaper.py placement/schemas/resource_class.py placement/schemas/resource_provider.py placement/schemas/trait.py placement/schemas/usage.py placement/tests/README.rst placement/tests/__init__.py placement/tests/fixtures.py placement/tests/functional/__init__.py placement/tests/functional/base.py 
placement/tests/functional/test_allocation.py placement/tests/functional/test_allocation_candidates.py placement/tests/functional/test_api.py placement/tests/functional/test_direct.py placement/tests/functional/test_lib_sync.py placement/tests/functional/test_verify_policy.py placement/tests/functional/cmd/__init__.py placement/tests/functional/cmd/test_status.py placement/tests/functional/db/__init__.py placement/tests/functional/db/test_allocation.py placement/tests/functional/db/test_allocation_candidates.py placement/tests/functional/db/test_attribute_cache.py placement/tests/functional/db/test_base.py placement/tests/functional/db/test_consumer.py placement/tests/functional/db/test_consumer_type.py placement/tests/functional/db/test_migrations.py placement/tests/functional/db/test_project.py placement/tests/functional/db/test_reshape.py placement/tests/functional/db/test_resource_class.py placement/tests/functional/db/test_resource_provider.py placement/tests/functional/db/test_trait.py placement/tests/functional/db/test_usage.py placement/tests/functional/db/test_user.py placement/tests/functional/fixtures/__init__.py placement/tests/functional/fixtures/capture.py placement/tests/functional/fixtures/gabbits.py placement/tests/functional/fixtures/placement.py placement/tests/functional/gabbits/aggregate-legacy-rbac.yaml placement/tests/functional/gabbits/aggregate-policy.yaml placement/tests/functional/gabbits/aggregate-secure-rbac.yaml placement/tests/functional/gabbits/aggregate.yaml placement/tests/functional/gabbits/allocation-bad-class.yaml placement/tests/functional/gabbits/allocation-candidates-any-traits-groups.yaml placement/tests/functional/gabbits/allocation-candidates-any-traits.yaml placement/tests/functional/gabbits/allocation-candidates-bug-1792503.yaml placement/tests/functional/gabbits/allocation-candidates-legacy-rbac.yaml placement/tests/functional/gabbits/allocation-candidates-mappings-numa.yaml placement/tests/functional/gabbits/allocation-candidates-mappings-sharing.yaml placement/tests/functional/gabbits/allocation-candidates-member-of.yaml placement/tests/functional/gabbits/allocation-candidates-policy.yaml placement/tests/functional/gabbits/allocation-candidates-root-required.yaml placement/tests/functional/gabbits/allocation-candidates-secure-rbac.yaml placement/tests/functional/gabbits/allocation-candidates.yaml placement/tests/functional/gabbits/allocations-1-12.yaml placement/tests/functional/gabbits/allocations-1-8.yaml placement/tests/functional/gabbits/allocations-1.28.yaml placement/tests/functional/gabbits/allocations-bug-1714072.yaml placement/tests/functional/gabbits/allocations-bug-1778591.yaml placement/tests/functional/gabbits/allocations-bug-1778743.yaml placement/tests/functional/gabbits/allocations-bug-1779717.yaml placement/tests/functional/gabbits/allocations-legacy-rbac.yaml placement/tests/functional/gabbits/allocations-mappings.yaml placement/tests/functional/gabbits/allocations-policy.yaml placement/tests/functional/gabbits/allocations-post.yaml placement/tests/functional/gabbits/allocations-secure-rbac.yaml placement/tests/functional/gabbits/allocations.yaml placement/tests/functional/gabbits/basic-http.yaml placement/tests/functional/gabbits/bug-1674694.yaml placement/tests/functional/gabbits/confirm-auth.yaml placement/tests/functional/gabbits/consumer-types-1.38.yaml placement/tests/functional/gabbits/consumer-types-bug-story-2009167.yaml placement/tests/functional/gabbits/cors.yaml 
placement/tests/functional/gabbits/ensure-consumer.yaml placement/tests/functional/gabbits/granular-same-subtree.yaml placement/tests/functional/gabbits/granular.yaml placement/tests/functional/gabbits/inventory-legacy-rbac.yaml placement/tests/functional/gabbits/inventory-policy.yaml placement/tests/functional/gabbits/inventory-secure-rbac.yaml placement/tests/functional/gabbits/inventory.yaml placement/tests/functional/gabbits/microversion-bug-1724065.yaml placement/tests/functional/gabbits/microversion.yaml placement/tests/functional/gabbits/non-cors.yaml placement/tests/functional/gabbits/reshaper-legacy-rbac.yaml placement/tests/functional/gabbits/reshaper-policy.yaml placement/tests/functional/gabbits/reshaper-secure-rbac.yaml placement/tests/functional/gabbits/reshaper.yaml placement/tests/functional/gabbits/resource-class-in-use.yaml placement/tests/functional/gabbits/resource-classes-1-6.yaml placement/tests/functional/gabbits/resource-classes-1-7.yaml placement/tests/functional/gabbits/resource-classes-last-modified.yaml placement/tests/functional/gabbits/resource-classes-legacy-rbac.yaml placement/tests/functional/gabbits/resource-classes-policy.yaml placement/tests/functional/gabbits/resource-classes-secure-rbac.yaml placement/tests/functional/gabbits/resource-classes.yaml placement/tests/functional/gabbits/resource-provider-aggregates.yaml placement/tests/functional/gabbits/resource-provider-any-traits.yaml placement/tests/functional/gabbits/resource-provider-bug-1779818.yaml placement/tests/functional/gabbits/resource-provider-duplication.yaml placement/tests/functional/gabbits/resource-provider-legacy-rbac.yaml placement/tests/functional/gabbits/resource-provider-links.yaml placement/tests/functional/gabbits/resource-provider-policy.yaml placement/tests/functional/gabbits/resource-provider-resources-query.yaml placement/tests/functional/gabbits/resource-provider-secure-rbac.yaml placement/tests/functional/gabbits/resource-provider.yaml placement/tests/functional/gabbits/same-subtree-deep.yaml placement/tests/functional/gabbits/shared-resources.yaml placement/tests/functional/gabbits/traits-legacy-rbac.yaml placement/tests/functional/gabbits/traits-policy.yaml placement/tests/functional/gabbits/traits-secure-rbac.yaml placement/tests/functional/gabbits/traits.yaml placement/tests/functional/gabbits/unicode.yaml placement/tests/functional/gabbits/usage-legacy-rbac.yaml placement/tests/functional/gabbits/usage-policy.yaml placement/tests/functional/gabbits/usage-secure-rbac.yaml placement/tests/functional/gabbits/usage.yaml placement/tests/functional/gabbits/with-allocations.yaml placement/tests/unit/__init__.py placement/tests/unit/base.py placement/tests/unit/policy_fixture.py placement/tests/unit/test_auth.py placement/tests/unit/test_context.py placement/tests/unit/test_db_api.py placement/tests/unit/test_db_conf.py placement/tests/unit/test_deploy.py placement/tests/unit/test_fault_wrap.py placement/tests/unit/test_handler.py placement/tests/unit/test_microversion.py placement/tests/unit/test_policy.py placement/tests/unit/test_requestlog.py placement/tests/unit/test_util.py placement/tests/unit/cmd/__init__.py placement/tests/unit/cmd/test_manage.py placement/tests/unit/handlers/__init__.py placement/tests/unit/handlers/test_aggregate.py placement/tests/unit/handlers/test_resource_provider.py placement/tests/unit/handlers/test_trait.py placement/tests/unit/handlers/test_util.py placement/tests/unit/objects/__init__.py placement/tests/unit/objects/base.py 
placement/tests/unit/objects/test_allocation.py placement/tests/unit/objects/test_allocation_candidate.py placement/tests/unit/objects/test_inventory.py placement/tests/unit/objects/test_resource_class.py placement/tests/unit/objects/test_resource_provider.py placement/tests/unit/objects/test_rp_candidates.py placement/tests/unit/objects/test_trait.py placement/tests/unit/objects/test_usage.py placement/wsgi/__init__.py placement/wsgi/api.py playbooks/nested-perfload.yaml playbooks/perfload.yaml playbooks/post.yaml releasenotes/notes/add-placment-wsgi-module-ae42938ebe0258cb.yaml releasenotes/notes/alloc-candidates-in-tree-f69b0de5ba33096b.yaml releasenotes/notes/allocation-candidate-mappings-e00cf6deadcee9ab.yaml releasenotes/notes/allocation-candidate-same_subtree-aeed7b2570293dfb.yaml releasenotes/notes/allocation-candidates-root_required-bfe4f96f96a2a5db.yaml releasenotes/notes/allocation_conflict_retry_count-329daae86059f5ec.yaml releasenotes/notes/any-traits-support-d3807c27e5a8865c.yaml releasenotes/notes/bug-1792503-member-of-5c10df94caf3bd08.yaml releasenotes/notes/bug-2070257-allocation-candidates-generation-limit-and-strategy.yaml-e73796898163fb55.yaml releasenotes/notes/consumer_type-857b812aef10381e.yaml releasenotes/notes/create-allocation-empty-mapping-field-f5f97de6df891362.yaml releasenotes/notes/db-auto-sync-e418f3f181958c7c.yaml releasenotes/notes/deprecate-json-formatted-policy-file-dbec7a29325316de.yaml releasenotes/notes/deprecate-placement-policy-file-1777dc2e92d8363c.yaml releasenotes/notes/drop-python-2-aabea7dcdeca7ebf.yaml releasenotes/notes/drop-python-3-6-and-3-7-9db9b12a73106e26.yaml releasenotes/notes/drop-python-3-6-and-3-7-c3d8c440800ed885.yaml releasenotes/notes/drop-python-3-8-4636cf15992db5e7.yaml releasenotes/notes/fix-osprofiler-support-78b34a92c32fd30f.yaml releasenotes/notes/granular-request-suffix-a7fd857eadc16b56.yaml releasenotes/notes/http_proxy_to_wsgi-6c8392d7eaed7c8d.yaml releasenotes/notes/limit-nested-allocation-candidates-0886e569d15ad951.yaml releasenotes/notes/negative-aggregate-membership-1dde3cbe27c69279.yaml releasenotes/notes/placement-status-upgrade-check-3aa412fd6cb1e4bc.yaml releasenotes/notes/policy-defaults-refresh-d903d15cd51ac1a8.yaml releasenotes/notes/rbac-policy-support-94f84c29da81c331.yaml releasenotes/notes/re-parenting-providers-94dcedff45b35bf7.yaml releasenotes/notes/remove-deprecated-placement-policy-cba1414ca626302d.yaml releasenotes/notes/remove-placement-policy-file-config-bb9bb26332413a77.yaml releasenotes/notes/set_root_provider_id-53930a5d1dbd374f.yaml releasenotes/notes/stein-prelude-779b0dbfe65cf9ac.yaml releasenotes/notes/train-prelude-06739452ba2f66d9.yaml releasenotes/notes/train-require-root-provider-ids-60bc374ac354f41e.yaml releasenotes/notes/upgrade-status-check-incomplete-consumers-3362d7db55dd8bdf.yaml releasenotes/source/2023.1.rst releasenotes/source/2023.2.rst releasenotes/source/2024.1.rst releasenotes/source/2024.2.rst releasenotes/source/conf.py releasenotes/source/index.rst releasenotes/source/stein.rst releasenotes/source/train.rst releasenotes/source/unreleased.rst releasenotes/source/ussuri.rst releasenotes/source/victoria.rst releasenotes/source/wallaby.rst releasenotes/source/xena.rst releasenotes/source/yoga.rst releasenotes/source/zed.rst tools/flake8wrap.sh tools/test-setup.sh././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591511.0 
openstack_placement-13.0.0/openstack_placement.egg-info/dependency_links.txt0000664000175000017500000000000100000000000027414 0ustar00zuulzuul00000000000000 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591511.0 openstack_placement-13.0.0/openstack_placement.egg-info/entry_points.txt0000664000175000017500000000070100000000000026642 0ustar00zuulzuul00000000000000[console_scripts] placement-manage = placement.cmd.manage:main placement-status = placement.cmd.status:main [oslo.config.opts] placement.conf = placement.conf.opts:list_opts [oslo.config.opts.defaults] nova.conf = placement.conf.base:set_lib_defaults [oslo.policy.enforcer] placement = placement.policy:get_enforcer [oslo.policy.policies] placement = placement.policies:list_rules [wsgi_scripts] placement-api = placement.wsgi:init_application ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591511.0 openstack_placement-13.0.0/openstack_placement.egg-info/not-zip-safe0000664000175000017500000000000100000000000025574 0ustar00zuulzuul00000000000000 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591511.0 openstack_placement-13.0.0/openstack_placement.egg-info/pbr.json0000664000175000017500000000005700000000000025026 0ustar00zuulzuul00000000000000{"git_version": "476a14dc", "is_release": true}././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591511.0 openstack_placement-13.0.0/openstack_placement.egg-info/requires.txt0000664000175000017500000000061500000000000025750 0ustar00zuulzuul00000000000000pbr>=3.1.1 SQLAlchemy>=1.4.0 keystonemiddleware>=4.18.0 Routes>=2.3.1 WebOb>=1.8.2 jsonschema>=3.2.0 requests>=2.25.0 oslo.concurrency>=3.26.0 oslo.config>=6.7.0 oslo.context>=2.22.0 oslo.log>=4.3.0 oslo.serialization>=2.25.0 oslo.utils>=4.5.0 oslo.db>=8.6.0 oslo.policy>=4.4.0 oslo.middleware>=3.31.0 oslo.upgradecheck>=1.3.0 os-resource-classes>=1.1.0 os-traits>=3.3.0 microversion-parse>=0.2.1 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591511.0 openstack_placement-13.0.0/openstack_placement.egg-info/top_level.txt0000664000175000017500000000001200000000000026071 0ustar00zuulzuul00000000000000placement ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.244778 openstack_placement-13.0.0/placement/0000775000175000017500000000000000000000000017605 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/__init__.py0000664000175000017500000000000000000000000021704 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/attribute_cache.py0000664000175000017500000002004000000000000023301 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import collections import sqlalchemy as sa from placement.db.sqlalchemy import models from placement import db_api from placement import exception from placement.objects import consumer_type as ct_obj _CONSUMER_TYPE_TBL = models.ConsumerType.__table__ _RC_TBL = models.ResourceClass.__table__ _TRAIT_TBL = models.Trait.__table__ class _AttributeCache(object): """A cache of integer and string lookup values for string-based attributes. Subclasses must define `_table` and `_not_found` members describing the database table which is the authoritative source of data and the exception raised if data for an attribute is not found, respectively. The cache is required to be correct for the extent of any individual API request and be used only for those entities where any change to the underlying data is only making that change and will have no subsequent queries into the cache. For example, when we add a new resource class we do not then list all the resource classes from within the same session. Despite that requirement, any time an entity associated with a cache is created, updated, or deleted `clear()` should be called on the cache. """ _table = None _not_found = None # The cache internally stores either sqlalchemy Row objects or # Attrs namedtuples but Row is compatible with namedtuple interface too. Attrs = collections.namedtuple( "Attrs", ["id", "name", "updated_at", "created_at"] ) def __init__(self, ctx): """Initialize the cache of resource class identifiers. :param ctx: `placement.context.RequestContext` from which we can grab a `SQLAlchemy.Connection` object to use for any DB lookups. """ # Prevent this class being created directly, relevant during # development. assert self._table is not None, "_table must be defined" assert self._not_found is not None, "_not_found must be defined" self._ctx = ctx self.clear() def clear(self): self._id_cache = {} self._str_cache = {} self._all_cache = {} def id_from_string(self, attr_str): """Given a string representation of an attribute -- e.g. "DISK_GB" or "CUSTOM_IRON_SILVER" -- return the integer code for the attribute by doing a DB lookup into the appropriate table; however, the results of these DB lookups are cached since the lookups are so frequent. :param attr_str: The string representation of the attribute to look up a numeric identifier for. :returns Integer identifier for the attribute. :raises An instance of the subclass' _not_found exception if attribute cannot be found in the DB. """ attr_id = self._id_cache.get(attr_str) if attr_id is not None: return attr_id # Otherwise, check the database table self._refresh_from_db(self._ctx) if attr_str in self._id_cache: return self._id_cache[attr_str] raise self._not_found(name=attr_str) def all_from_string(self, attr_str): """Given a string representation of an attribute -- e.g. "DISK_GB" or "CUSTOM_IRON_SILVER" -- return all the attribute info. :param attr_str: The string representation of the attribute for which to look up the object. :returns: namedtuple representing the attribute fields, if the attribute was found in the appropriate database table. :raises An instance of the subclass' _not_found exception if attr_str cannot be found in the DB. 
""" attrs = self._all_cache.get(attr_str) if attrs is not None: return attrs # Otherwise, check the database table self._refresh_from_db(self._ctx) if attr_str in self._all_cache: return self._all_cache[attr_str] raise self._not_found(name=attr_str) def string_from_id(self, attr_id): """The reverse of the id_from_string() method. Given a supplied numeric identifier for an attribute, we look up the corresponding string representation, via a DB lookup. The results of these DB lookups are cached since the lookups are so frequent. :param attr_id: The numeric representation of the attribute to look up a string identifier for. :returns: String identifier for the attribute. :raises An instances of the subclass' _not_found exception if attr_id cannot be found in the DB. """ attr_str = self._str_cache.get(attr_id) if attr_str is not None: return attr_str # Otherwise, check the database table self._refresh_from_db(self._ctx) if attr_id in self._str_cache: return self._str_cache[attr_id] raise self._not_found(name=attr_id) def get_all(self): """Return an iterator of all the resources in the cache with all their attributes as a namedtuple. In Python3 the return value is a generator. """ if not self._all_cache: self._refresh_from_db(self._ctx) return self._all_cache.values() @db_api.placement_context_manager.reader def _refresh_from_db(self, ctx): """Grabs all resource classes or traits from the respective DB table and populates the supplied cache object's internal integer and string identifier dicts. :param ctx: RequestContext with the the database session. """ table = self._table sel = sa.select( table.c.id, table.c.name, table.c.updated_at, table.c.created_at, ) res = ctx.session.execute(sel).fetchall() self._id_cache = {r[1]: r[0] for r in res} self._str_cache = {r[0]: r[1] for r in res} # Note that r is Row object that is compatible with the namedtuple # interface of the cache self._all_cache = {r[1]: r for r in res} def _add_attribute(self, attr_id, name, created_at, updated_at): """Use this to add values to the cache that are not coming from the database, like defaults. """ self._id_cache[name] = attr_id self._str_cache[attr_id] = name attrs = self.Attrs(attr_id, name, updated_at, created_at) self._all_cache[name] = attrs class ConsumerTypeCache(_AttributeCache): """An _AttributeCache for consumer types.""" _table = _CONSUMER_TYPE_TBL _not_found = exception.ConsumerTypeNotFound @db_api.placement_context_manager.reader def _refresh_from_db(self, ctx): super(ConsumerTypeCache, self)._refresh_from_db(ctx) # The consumer_type_id is nullable and records with a NULL (None) # consumer_type_id are considered as 'unknown'. Also the 'unknown' # consumer_type is not created in the database so we need to manually # populate it in the cache here. self._add_attribute( attr_id=None, name=ct_obj.NULL_CONSUMER_TYPE_ALIAS, # should we synthesize some dates in the past instead? 
created_at=None, updated_at=None, ) class ResourceClassCache(_AttributeCache): """An _AttributeCache for resource classes.""" _table = _RC_TBL _not_found = exception.ResourceClassNotFound class TraitCache(_AttributeCache): """An _AttributeCache for traits.""" _table = _TRAIT_TBL _not_found = exception.TraitNotFound ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/auth.py0000664000175000017500000000717300000000000021130 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from keystonemiddleware import auth_token from oslo_log import log as logging from oslo_middleware import request_id import webob.dec import webob.exc from placement import context LOG = logging.getLogger(__name__) class Middleware(object): def __init__(self, application, **kwargs): self.application = application # NOTE(cdent): Only to be used in tests where auth is being faked. This # middleware can be used to mimic keystonemiddleware auth_token middleware, # which is important for building API protection tests without an external # dependency on keystone. class NoAuthMiddleware(Middleware): """Require a token if one isn't present.""" def __init__(self, application): self.application = application @webob.dec.wsgify def __call__(self, req): if req.environ['PATH_INFO'] == '/': return self.application if 'X-Auth-Token' not in req.headers: return webob.exc.HTTPUnauthorized() token = req.headers['X-Auth-Token'] user_id, _sep, project_id = token.partition(':') project_id = project_id or user_id # Real keystone expands and flattens roles to include their implied # roles, e.g. admin implies member and reader, so tests should include # this flattened list also if 'HTTP_X_ROLES' in req.environ.keys(): roles = req.headers['X_ROLES'].split(',') elif user_id == 'admin': roles = ['admin'] else: roles = [] req.headers['X_USER_ID'] = user_id if not req.headers.get('OPENSTACK_SYSTEM_SCOPE'): req.headers['X_TENANT_ID'] = project_id req.headers['X_ROLES'] = ','.join(roles) return self.application class PlacementKeystoneContext(Middleware): """Make a request context from keystone headers.""" @webob.dec.wsgify def __call__(self, req): req_id = req.environ.get(request_id.ENV_REQUEST_ID) ctx = context.RequestContext.from_environ( req.environ, request_id=req_id) if ctx.user_id is None and req.environ['PATH_INFO'] not in ['/', '']: LOG.debug("Neither X_USER_ID nor X_USER found in request") return webob.exc.HTTPUnauthorized() req.environ['placement.context'] = ctx return self.application class PlacementAuthProtocol(auth_token.AuthProtocol): """A wrapper on Keystone auth_token middleware. Does not perform verification of authentication tokens for root in the API. 
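    Requests whose path is '/' or the empty string are handed straight to
    the wrapped placement application, keeping the version discovery document
    at the API root reachable without a token; every other path goes through
    the normal keystonemiddleware token validation.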
""" def __init__(self, app, conf): self._placement_app = app super(PlacementAuthProtocol, self).__init__(app, conf) def __call__(self, environ, start_response): if environ['PATH_INFO'] in ['/', '']: return self._placement_app(environ, start_response) return super(PlacementAuthProtocol, self).__call__( environ, start_response) def filter_factory(global_conf, **local_conf): conf = global_conf.copy() conf.update(local_conf) def auth_filter(app): return PlacementAuthProtocol(app, conf) return auth_filter ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.244778 openstack_placement-13.0.0/placement/cmd/0000775000175000017500000000000000000000000020350 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/cmd/__init__.py0000664000175000017500000000000000000000000022447 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/cmd/manage.py0000664000175000017500000002135100000000000022154 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import functools import prettytable import sys from oslo_config import cfg from oslo_log import log as logging import pbr.version from placement import conf from placement import context from placement.db.sqlalchemy import migration from placement import db_api from placement.objects import consumer as consumer_obj from placement.objects import resource_provider as rp_obj version_info = pbr.version.VersionInfo('openstack-placement') LOG = logging.getLogger(__name__) online_migrations = ( # These functions are called with a DB context and a count, which is the # maximum batch size requested by the user. They must be idempotent. # At most $count records should be migrated. The function must return a # tuple of (found, done). The found value indicates how many # unmigrated/candidate records existed in the database prior to the # migration (either total, or up to the $count limit provided), and a # nonzero found value may tell the user that there is still work to do. # The done value indicates whether or not any records were actually # migrated by the function. Thus if both (found, done) are nonzero, work # was done and some work remains. If found is nonzero and done is zero, # some records are not migratable, but all migrations that can complete # have finished. # Added in Stein rp_obj.set_root_provider_ids, # Added in Stein (copied from migration added to Nova in Rocky) consumer_obj.create_incomplete_consumers, ) class DbCommands(object): def __init__(self, config): self.config = config def db_sync(self): # Let exceptions raise for now, they will go to stderr. 
migration.upgrade('head') return 0 def db_version(self): print(migration.version()) return 0 def db_stamp(self): migration.stamp(self.config.command.version) return 0 def db_online_data_migrations(self): """Processes online data migration. :returns: 0 if no (further) updates are possible, 1 if the ``--max-count`` option was used and some updates were completed successfully (even if others generated errors), 2 if some updates generated errors and no other migrations were able to take effect in the last batch attempted, or 127 if invalid input is provided. """ max_count = self.config.command.max_count if max_count is not None: try: max_count = int(max_count) except ValueError: max_count = -1 if max_count < 1: print('Must supply a positive value for max_count') return 127 limited = True else: max_count = 50 limited = False print('Running batches of %i until complete' % max_count) ran = None migration_info = collections.OrderedDict() exceptions = False while ran is None or ran != 0: migrations, exceptions = self._run_online_migration(max_count) ran = 0 # For each batch of migration method results, build the cumulative # set of results. for name in migrations: migration_info.setdefault(name, (0, 0)) migration_info[name] = ( migration_info[name][0] + migrations[name][0], migration_info[name][1] + migrations[name][1], ) ran += migrations[name][1] if limited: break t = prettytable.PrettyTable( ['Migration', 'Total Found', 'Completed']) for name, info in migration_info.items(): t.add_row([name, info[0], info[1]]) print(t) # NOTE(tetsuro): In "limited" case, if some update has been "ran", # exceptions are not considered fatal because work may still remain # to be done, and that work may resolve dependencies for the failing # migrations. if exceptions and not (limited and ran): print("Some migrations failed unexpectedly. Check log for " "details.") return 2 # TODO(mriedem): Potentially add another return code for # "there are more migrations, but not completable right now" return ran and 1 or 0 def _run_online_migration(self, max_count): ctxt = context.RequestContext(config=self.config) ran = 0 exceptions = False migrations = collections.OrderedDict() for migration_meth in online_migrations: count = max_count - ran try: found, done = migration_meth(ctxt, count) except Exception: msg = ("Error attempting to run %(method)s" % dict( method=migration_meth)) print(msg) LOG.exception(msg) exceptions = True found = done = 0 name = migration_meth.__name__ if found: print('%(total)i rows matched query %(meth)s, %(done)i ' 'migrated' % {'total': found, 'meth': name, 'done': done}) # This is the per-migration method result for this batch, and # _run_online_migration will either continue on to the next # migration, or stop if up to this point we've processed max_count # of records across all migration methods. migrations[name] = found, done ran += done if ran >= max_count: break return migrations, exceptions def add_db_command_parsers(subparsers, config): command_object = DbCommands(config) # If we set False here, we avoid having an exit during the parse # args part of CONF processing and we can thus print out meaningful # help text. subparsers.required = False parser = subparsers.add_parser('db') # Avoid https://bugs.python.org/issue9351 with cpython < 2.7.9 parser.set_defaults(func=parser.print_help) db_parser = parser.add_subparsers(description='database commands') help = 'Sync the database to the current version.' 
sync_parser = db_parser.add_parser('sync', help=help, description=help) sync_parser.set_defaults(func=command_object.db_sync) help = 'Report the current database version.' version_parser = db_parser.add_parser( 'version', help=help, description=help) version_parser.set_defaults(func=command_object.db_version) help = 'Stamp the revision table with the given version.' stamp_parser = db_parser.add_parser('stamp', help=help, description=help) stamp_parser.add_argument('version', help='the version to stamp') stamp_parser.set_defaults(func=command_object.db_stamp) help = 'Run the online data migrations.' online_dm_parser = db_parser.add_parser( 'online_data_migrations', help=help, description=help) online_dm_parser.add_argument( '--max-count', metavar='', help='Maximum number of objects to consider') online_dm_parser.set_defaults( func=command_object.db_online_data_migrations) def setup_commands(config): # This is a separate method because it facilitates unit testing. # Use an additional SubCommandOpt and parser for each new sub command. add_db_cmd_parsers = functools.partial( add_db_command_parsers, config=config) command_opt = cfg.SubCommandOpt( 'db', dest='command', title='Command', help='Available DB commands', handler=add_db_cmd_parsers) return [command_opt] def main(): config = cfg.ConfigOpts() conf.register_opts(config) command_opts = setup_commands(config) config.register_cli_opts(command_opts) config(sys.argv[1:], project='placement', version=version_info.version_string(), default_config_files=None) db_api.configure(config) try: func = config.command.func return_code = func() # If return_code ends up None we assume 0. sys.exit(return_code or 0) except cfg.NoSuchOptError: config.print_help() sys.exit(1) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/cmd/status.py0000664000175000017500000001451200000000000022250 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sqlalchemy as sa import sys from oslo_config import cfg from oslo_upgradecheck import common_checks from oslo_upgradecheck import upgradecheck from placement import conf from placement import context from placement.db.sqlalchemy import models from placement import db_api class Checks(upgradecheck.UpgradeCommands): """Checks for the ``placement-status upgrade check`` command. Various upgrade checks should be added as separate methods in this class and added to _upgrade_checks tuple. """ def __init__(self, config): self.config = config self.ctxt = context.RequestContext(config=config) @db_api.placement_context_manager.reader def _check_missing_root_ids(self, ctxt): exists = sa.exists().where( models.ResourceProvider.root_provider_id == sa.null()) ret = ctxt.session.query(exists).scalar() return ret def _check_root_provider_ids(self): """Starting in Queens with the 1.28 microversion, resource_providers table has the root_provider_id column. 
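        Schema migration 611cd6dffd7b in this tree additionally blocks
        "placement-manage db sync" while any resource provider record still
        has a NULL root_provider_id.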
Older resource_providers with no root provider id records will be online migrated when the "placement-manage db online_data_migrations" command is run during an upgrade. This status check emits a failure if there are missing root provider ids to remind operators to perform the data migration. """ if self._check_missing_root_ids(self.ctxt): return upgradecheck.Result( upgradecheck.Code.FAILURE, details='There is at least one resource provider table ' 'record which misses its root provider id. ' 'Run the "placement-manage db ' 'online_data_migrations" command.') return upgradecheck.Result(upgradecheck.Code.SUCCESS) @db_api.placement_context_manager.reader def _count_missing_consumers(self, ctxt): allocation = models.Allocation.__table__ consumer = models.Consumer.__table__ return ctxt.session.execute( sa.select( sa.func.count(sa.distinct(allocation.c.consumer_id)) ).select_from( allocation.outerjoin( consumer, allocation.c.consumer_id == consumer.c.uuid, ) ).where( consumer.c.id.is_(None) ) ).fetchone()[0] def _check_incomplete_consumers(self): """Allocations created with microversion<1.8 prior to Rocky will not have an associated consumers table record. Starting in Rocky with the 1.28 microversion, consumer generations were added to avoid multiple processes overwriting allocations. Older allocations with incomplete consumer records will be online migrated when accessed via the REST API or when the "placement-manage db online_data_migrations" command is run during an upgrade. This status check emits a warning if there are incomplete consumers to remind operators to perform the data migration. Note that normally we would not add an upgrade status check to simply mirror an online data migration since online data migrations should be part of deploying/upgrading placement automation. However, with placement being freshly extracted from nova, this check serves as a friendly reminder and because the data migration will eventually be removed from nova along with the rest of the placement code. """ missing_consumer_count = self._count_missing_consumers(self.ctxt) if missing_consumer_count: # We found missing consumers for existing allocations so return # a warning and tell the user to run the online data migrations. return upgradecheck.Result( upgradecheck.Code.WARNING, details='There are %s incomplete consumers table records ' 'for existing allocations. Run the ' '"placement-manage db online_data_migrations" ' 'command.' % missing_consumer_count) # No missing consumers (or no allocations [fresh install?]) so it's OK. return upgradecheck.Result(upgradecheck.Code.SUCCESS) def _check_policy_json(self): """A wrapper passing a proper config object when calling the generic policy json check. """ return common_checks.check_policy_json(self, self.config) # The format of the check functions is to return an # oslo_upgradecheck.upgradecheck.Result # object with the appropriate # oslo_upgradecheck.upgradecheck.Code and details set. # If the check hits warnings or failures then those should be stored # in the returned Result's "details" attribute. The # summary will be rolled up at the end of the check() method. _upgrade_checks = ( ('Missing Root Provider IDs', _check_root_provider_ids), ('Incomplete Consumers', _check_incomplete_consumers), ("Policy File JSON to YAML Migration", _check_policy_json), ) def main(): # Set up the configuration to configure the database. config = cfg.ConfigOpts() conf.register_opts(config) # Register cli opts before parsing args. 
upgradecheck.register_cli_options(config, Checks(config)) # A slice of sys.argv is provided to pass the command line # arguments for processing, without the name of the calling # script ('placement-status'). If we were using # upgradecheck.main() directly, it would do it for us, but # we do not because of the need to configure the database # first. config(args=sys.argv[1:], project='placement') db_api.configure(config) return upgradecheck.run(config) if __name__ == '__main__': sys.exit(main()) ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.244778 openstack_placement-13.0.0/placement/conf/0000775000175000017500000000000000000000000020532 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/conf/__init__.py0000664000175000017500000000315700000000000022651 0ustar00zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from oslo_middleware import cors from oslo_middleware import http_proxy_to_wsgi from oslo_policy import opts as policy_opts from placement.conf import api from placement.conf import base from placement.conf import database from placement.conf import paths from placement.conf import placement # To avoid global config, we require an existing ConfigOpts to be passed # to register_opts. Then the caller can have some assurance that the # config they are using will maintain some independence. def register_opts(conf): api.register_opts(conf) base.register_opts(conf) database.register_opts(conf) paths.register_opts(conf) placement.register_opts(conf) logging.register_options(conf) policy_opts.set_defaults(conf) # The oslo.middleware does not present a register_opts method, instead # it shares a list of available opts. conf.register_opts(cors.CORS_OPTS, 'cors') conf.register_opts(http_proxy_to_wsgi.OPTS, 'oslo_middleware') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/conf/api.py0000664000175000017500000000257400000000000021665 0ustar00zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg api_group = cfg.OptGroup( 'api', title='API options', help=""" Options under this group are used to define Placement API. 
""") api_opts = [ cfg.StrOpt( "auth_strategy", default="keystone", choices=("keystone", "noauth2"), deprecated_group="DEFAULT", help=""" This determines the strategy to use for authentication: keystone or noauth2. 'noauth2' is designed for testing only, as it does no actual credential checking. 'noauth2' provides administrative credentials only if 'admin' is specified as the username. """), ] def register_opts(conf): conf.register_group(api_group) conf.register_opts(api_opts, group=api_group) def list_opts(): return {api_group: api_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/conf/base.py0000664000175000017500000000302400000000000022015 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from oslo_policy import opts as policy_opts base_options = [ cfg.StrOpt( 'tempdir', help='Explicitly specify the temporary working directory.'), ] def set_lib_defaults(): """Update default value for configuration options from other namespace. Example, oslo lib config options. This is needed for config generator tool to pick these default value changes. https://docs.openstack.org/oslo.config/latest/cli/ generator.html#modifying-defaults-from-other-namespaces """ # Update default value of oslo.policy policy_file config option. policy_opts.set_defaults(cfg.CONF, 'policy.yaml') def register_opts(conf): conf.register_opts(base_options) def list_opts(): return {'DEFAULT': base_options} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/conf/database.py0000664000175000017500000001037300000000000022654 0ustar00zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from oslo_db import options as oslo_db_options _ENRICHED = False def enrich_help_text(alt_db_opts): def get_db_opts(): for group_name, db_opts in oslo_db_options.list_opts(): if group_name == 'database': return db_opts return [] for db_opt in get_db_opts(): for alt_db_opt in alt_db_opts: if alt_db_opt.name == db_opt.name: # NOTE(markus_z): We can append alternative DB specific help # texts here if needed. 
alt_db_opt.help = db_opt.help + alt_db_opt.help # NOTE(markus_z): We cannot simply do: # conf.register_opts(oslo_db_options.database_opts, 'placement_database') # If we reuse a db config option for two different groups ("placement_database" # and "database") and deprecate or rename a config option in one of these # groups, "oslo.config" cannot correctly determine which one to update. # That's why we copied & pasted these config options for the # "placement_database" group here. See nova commit ba407e3 ("Add support # for multiple database engines") for more details. # TODO(cdent): Consider our future options of using 'database' instead of # 'placement_database' for the group. This is already loose in the wild, # explicit, and safe if there will ever be more than one database, so may # be good to leave it. placement_db_group = cfg.OptGroup('placement_database', title='Placement API database options', help=""" The *Placement API Database* is a the database used with the placement service. If the connection option is not set, the placement service will not start. """) placement_db_opts = [ cfg.StrOpt( 'connection', help='', required=True, secret=True), cfg.StrOpt( 'connection_parameters', default='', help=''), cfg.BoolOpt( 'sqlite_synchronous', default=True, help=''), cfg.StrOpt( 'slave_connection', secret=True, help=''), cfg.StrOpt( 'mysql_sql_mode', default='TRADITIONAL', help=''), cfg.IntOpt( 'connection_recycle_time', default=3600, help=''), cfg.IntOpt( 'max_pool_size', help=''), cfg.IntOpt( 'max_retries', default=10, help=''), cfg.IntOpt( 'retry_interval', default=10, help=''), cfg.IntOpt( 'max_overflow', help=''), cfg.IntOpt( 'connection_debug', default=0, help=''), cfg.BoolOpt( 'connection_trace', default=False, help=''), cfg.IntOpt( 'pool_timeout', help=''), cfg.BoolOpt( 'sync_on_startup', default=False, help='If True, database schema migrations will be attempted when the' ' web service starts.'), ] def register_opts(conf): conf.register_opts(placement_db_opts, group=placement_db_group) def list_opts(): # NOTE(markus_z): 2016-04-04: If we list the oslo_db_options here, they # get emitted twice(!) in the "sample.conf" file. First under the # namespace "nova.conf" and second under the namespace "oslo.db". This # is due to the setting in file "etc/nova/nova-config-generator.conf". # As I think it is useful to have the "oslo.db" namespace information # in the "sample.conf" file, I omit the listing of the "oslo_db_options" # here. global _ENRICHED if not _ENRICHED: enrich_help_text(placement_db_opts) _ENRICHED = True return { placement_db_group: placement_db_opts, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/conf/opts.py0000664000175000017500000000523700000000000022100 0ustar00zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ This is the single point of entry to generate the sample configuration file for Placement. 
It collects all the necessary info from the other modules in this package. It is assumed that: * every other module in this package has a 'list_opts' function which return a dict where * the keys are strings which are the group names * the value of each key is a list of config options for that group * the placement.conf package doesn't have further packages with config options * this module is only used in the context of sample file generation """ import collections import importlib import os import pkgutil LIST_OPTS_FUNC_NAME = "list_opts" def _tupleize(dct): """Take the dict of options and convert to the 2-tuple format.""" return [(key, val) for key, val in dct.items()] def list_opts(): opts = collections.defaultdict(list) module_names = _list_module_names() imported_modules = _import_modules(module_names) _append_config_options(imported_modules, opts) return _tupleize(opts) def _list_module_names(): module_names = [] package_path = os.path.dirname(os.path.abspath(__file__)) for _, modname, ispkg in pkgutil.iter_modules(path=[package_path]): if modname == "opts" or ispkg: continue else: module_names.append(modname) return module_names def _import_modules(module_names): imported_modules = [] for modname in module_names: mod = importlib.import_module("placement.conf." + modname) if not hasattr(mod, LIST_OPTS_FUNC_NAME): msg = ("The module 'placement.conf.%s' should have a '%s' " "function which returns the config options." % (modname, LIST_OPTS_FUNC_NAME)) raise Exception(msg) else: imported_modules.append(mod) return imported_modules def _append_config_options(imported_modules, config_options): for mod in imported_modules: configs = mod.list_opts() for key, val in configs.items(): config_options[key].extend(val) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/conf/paths.py0000664000175000017500000000366400000000000022234 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # Copyright 2012 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from oslo_config import cfg ALL_OPTS = [ cfg.StrOpt( 'pybasedir', default=os.path.abspath( os.path.join(os.path.dirname(__file__), '../../')), sample_default='', help=""" The directory where the Placement python modules are installed. This is the default path for other config options which need to persist Placement internal data. It is very unlikely that you need to change this option from its default value. Possible values: * The full path to a directory. Related options: * ``state_path`` """), cfg.StrOpt( 'state_path', default='$pybasedir', help=""" The top-level directory for maintaining state used in Placement. This directory is used to store Placement's internal state. It is used by some tests that have behaviors carried over from Nova. Possible values: * The full path to a directory. 
Defaults to value provided in ``pybasedir``. """), ] def state_path_def(*args): """Return an uninterpolated path relative to $state_path.""" return os.path.join('$state_path', *args) def register_opts(conf): conf.register_opts(ALL_OPTS) def list_opts(): return {"DEFAULT": ALL_OPTS} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/conf/placement.py0000664000175000017500000001356400000000000023065 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg DEFAULT_CONSUMER_MISSING_ID = '00000000-0000-0000-0000-000000000000' placement_group = cfg.OptGroup( 'placement', title='Placement Service Options', help="Configuration options for connecting to the placement API service") placement_opts = [ cfg.BoolOpt( 'randomize_allocation_candidates', default=False, help=""" If True, when limiting allocation candidate results, the results will be a random sampling of the full result set. The [placement]max_allocation_candidates config might limit the size of the full set used as the input of the sampling. If False, allocation candidates are returned in a deterministic but undefined order. That is, all things being equal, two requests for allocation candidates will return the same results in the same order; but no guarantees are made as to how that order is determined. """), cfg.StrOpt( 'incomplete_consumer_project_id', default=DEFAULT_CONSUMER_MISSING_ID, help=""" Early API microversions (<1.8) allowed creating allocations and not specifying a project or user identifier for the consumer. In cleaning up the data modeling, we no longer allow missing project and user information. If an older client makes an allocation, we'll use this in place of the information it doesn't provide. """), cfg.StrOpt( 'incomplete_consumer_user_id', default=DEFAULT_CONSUMER_MISSING_ID, help=""" Early API microversions (<1.8) allowed creating allocations and not specifying a project or user identifier for the consumer. In cleaning up the data modeling, we no longer allow missing project and user information. If an older client makes an allocation, we'll use this in place of the information it doesn't provide. """), cfg.IntOpt( 'allocation_conflict_retry_count', default=10, help=""" The number of times to retry, server-side, writing allocations when there is a resource provider generation conflict. Raising this value may be useful when many concurrent allocations to the same resource provider are expected. """), cfg.IntOpt( 'max_allocation_candidates', default=-1, help=""" The maximum number of allocation candidates placement generates for a single request. This is a global limit to avoid excessive memory use and query runtime. If set to -1 it means that the number of generated candidates are only limited by the number and structure of the resource providers and the content of the allocation_candidates query. 
Note that the limit param of the allocation_candidates query is applied after all the viable candidates are generated so that limit alone is not enough to restrict the runtime or memory consumption of the query. In a deployment with thousands of resource providers or if the deployment has wide and symmetric provider trees, i.e. there are multiple children providers under the same root having inventory from the same resource class (e.g. in case of nova's mdev GPU or PCI in Placement features) we recommend to tune this config option based on the memory available for the placement service and the client timeout setting on the client side. A good initial value could be around 100000. In a deployment with wide and symmetric provider trees we also recommend to change the [placement]allocation_candidates_generation_strategy to breadth-first. """), cfg.StrOpt( 'allocation_candidates_generation_strategy', default="depth-first", choices=("depth-first", "breadth-first"), help=""" Defines the order placement visits viable root providers during allocation candidate generation: * depth-first, generates all candidates from the first viable root provider before moving to the next. * breadth-first, generates candidates from viable roots in a round-robin fashion, creating one candidate from each viable root before creating the second candidate from the first root. If the deployment has wide and symmetric provider trees, i.e. there are multiple children providers under the same root having inventory from the same resource class (e.g. in case of nova's mdev GPU or PCI in Placement features) then the depth-first strategy with a max_allocation_candidates limit might produce candidates from a limited set of root providers. On the other hand breadth-first strategy will ensure that the candidates are returned from all viable roots in a balanced way. Both strategies produce the candidates in the API response in an undefined but deterministic order. That is, all things being equal, two requests for allocation candidates will return the same results in the same order; but no guarantees are made as to how that order is determined. """), ] # Duplicate log_options from oslo_service so that we don't have to import # that package into placement. # NOTE(cdent): Doing so ends up requiring eventlet and other unnecessary # packages for just this one setting. service_opts = [ cfg.BoolOpt('log_options', default=True, help='Enables or disables logging values of all registered ' 'options when starting a service (at DEBUG level).'), ] def register_opts(conf): conf.register_group(placement_group) conf.register_opts(placement_opts, group=placement_group) conf.register_opts(service_opts) def list_opts(): return {placement_group.name: placement_opts} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/context.py0000664000175000017500000000513000000000000021642 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
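# NOTE: a rough usage sketch, illustrative only and not executed here.
# Command line tools such as placement-manage build a context directly from a
# loaded config object:
#
#     ctxt = RequestContext(config=config)
#     ctxt.can('placement:resource_providers:list', fatal=False)
#
# while the WSGI stack creates one per request in
# placement.auth.PlacementKeystoneContext and stores it in
# environ['placement.context']. The policy action name above is only an
# example value.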
from oslo_context import context from oslo_db.sqlalchemy import enginefacade from placement import attribute_cache from placement import exception from placement import policy @enginefacade.transaction_context_provider class RequestContext(context.RequestContext): def __init__(self, *args, **kwargs): self.config = kwargs.pop('config', None) self.ct_cache = attribute_cache.ConsumerTypeCache(self) self.rc_cache = attribute_cache.ResourceClassCache(self) self.trait_cache = attribute_cache.TraitCache(self) super(RequestContext, self).__init__(*args, **kwargs) def can(self, action, target=None, fatal=True): """Verifies that the given action is valid on the target in this context. :param action: string representing the action to be checked. :param target: As much information about the object being operated on as possible. The target argument should be a dict instance or an instance of a class that fully supports the Mapping abstract base class and deep copying. For object creation this should be a dictionary representing the location of the object e.g. ``{'project_id': context.project_id}``. If None, then this default target will be considered:: {'project_id': self.project_id, 'user_id': self.user_id} :param fatal: if False, will return False when an exception.PolicyNotAuthorized occurs. :raises placement.exception.PolicyNotAuthorized: if verification fails and fatal is True. :return: returns a non-False value (not necessarily "True") if authorized and False if not authorized and fatal is False. """ if target is None: target = {'project_id': self.project_id, 'user_id': self.user_id} try: return policy.authorize(self, action, target) except exception.PolicyNotAuthorized: if fatal: raise return False ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.244778 openstack_placement-13.0.0/placement/db/0000775000175000017500000000000000000000000020172 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/db/__init__.py0000664000175000017500000000000000000000000022271 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/db/constants.py0000664000175000017500000000237100000000000022563 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Useful db-related constants. In their own file so they can be imported cleanly.""" # The maximum value a signed INT type may have MAX_INT = 0x7FFFFFFF # NOTE(dosaboy): This is supposed to represent the maximum value that we can # place into a SQL single precision float so that we can check whether values # are oversize. Postgres and MySQL both define this as their max whereas Sqlite # uses dynamic typing so this would not apply. Different dbs react in different # ways to oversize values e.g. postgres will raise an exception while mysql # will round off the value. 
Nevertheless we may still want to know prior to # insert whether the value is oversize or not. SQL_SP_FLOAT_MAX = 3.40282e+38 ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2487779 openstack_placement-13.0.0/placement/db/sqlalchemy/0000775000175000017500000000000000000000000022334 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/db/sqlalchemy/__init__.py0000664000175000017500000000000000000000000024433 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2487779 openstack_placement-13.0.0/placement/db/sqlalchemy/alembic/0000775000175000017500000000000000000000000023730 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/db/sqlalchemy/alembic/env.py0000664000175000017500000000422400000000000025074 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from alembic import context from oslo_config import cfg from oslo_db import exception as db_exc from placement import conf from placement.db.sqlalchemy import models from placement import db_api as placement_db # add your model's MetaData object here # for 'autogenerate' support # from myapp import mymodel # target_metadata = mymodel.Base.metadata target_metadata = models.BASE.metadata # other values from the config, defined by the needs of env.py, # can be acquired: # my_important_option = config.get_main_option("my_important_option") # ... etc. def run_migrations_online(): """Run migrations in 'online' mode. In this scenario we need to create an Engine and associate a connection with the context. """ try: connectable = placement_db.get_placement_engine() except db_exc.CantStartEngineError: # We are being called from a context where the database hasn't been # configured so we need to set up Config and config the database. # This is usually the alembic command line. 
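        # A hedged example of that case: invoking alembic directly from a
        # source checkout, e.g.
        #
        #     alembic -c placement/db/sqlalchemy/alembic.ini upgrade head
        #
        # where no WSGI app or placement-manage command has configured the
        # database first, so this branch builds and registers the config
        # itself before retrying.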
config = cfg.ConfigOpts() conf.register_opts(config) config([], project="placement", default_config_files=None) placement_db.configure(config) connectable = placement_db.get_placement_engine() with connectable.connect() as connection: context.configure( connection=connection, target_metadata=target_metadata) with context.begin_transaction(): context.run_migrations() if context.is_offline_mode(): raise Exception('offline mode disabled') else: run_migrations_online() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/db/sqlalchemy/alembic/script.py.mako0000664000175000017500000000075600000000000026544 0ustar00zuulzuul00000000000000"""${message} Revision ID: ${up_revision} Revises: ${down_revision | comma,n} Create Date: ${create_date} """ from alembic import op import sqlalchemy as sa ${imports if imports else ""} # revision identifiers, used by Alembic. revision = ${repr(up_revision)} down_revision = ${repr(down_revision)} branch_labels = ${repr(branch_labels)} depends_on = ${repr(depends_on)} def upgrade(): ${upgrades if upgrades else "pass"} def downgrade(): ${downgrades if downgrades else "pass"} ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2487779 openstack_placement-13.0.0/placement/db/sqlalchemy/alembic/versions/0000775000175000017500000000000000000000000025600 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000021400000000000011452 xustar0000000000000000118 path=openstack_placement-13.0.0/placement/db/sqlalchemy/alembic/versions/422ece571366_add_consumer_types_table.py 22 mtime=1743591465.0 openstack_placement-13.0.0/placement/db/sqlalchemy/alembic/versions/422ece571366_add_consumer_types_0000664000175000017500000000341500000000000033314 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Add consumer_types table Revision ID: 422ece571366 Revises: b5c396305c25 Create Date: 2019-07-02 13:47:04.165692 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. 
revision = '422ece571366' down_revision = 'b5c396305c25' branch_labels = None depends_on = None def upgrade(): op.create_table( 'consumer_types', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), autoincrement=True, nullable=False), sa.Column('name', sa.Unicode(length=255), nullable=False), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('name', name='uniq_consumer_types0name'), ) with op.batch_alter_table('consumers') as batch_op: batch_op.add_column( sa.Column( 'consumer_type_id', sa.Integer(), sa.ForeignKey('consumer_types.id', name='consumers_consumer_type_id_fkey'), nullable=True ) ) op.create_index( 'consumers_consumer_type_id_idx', 'consumers', ['consumer_type_id'], unique=False ) ././@PaxHeader0000000000000000000000000000022200000000000011451 xustar0000000000000000124 path=openstack_placement-13.0.0/placement/db/sqlalchemy/alembic/versions/611cd6dffd7b_block_on_null_root_provider_id.py 22 mtime=1743591465.0 openstack_placement-13.0.0/placement/db/sqlalchemy/alembic/versions/611cd6dffd7b_block_on_null_root_0000664000175000017500000000320600000000000033607 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Block on null root_provider_id Revision ID: 611cd6dffd7b Revises: b4ed3a175331 Create Date: 2019-05-09 13:57:04.874293 """ from alembic import context import sqlalchemy as sa from sqlalchemy import func as sqlfunc from sqlalchemy import MetaData, Table, select # revision identifiers, used by Alembic. revision = '611cd6dffd7b' down_revision = 'b4ed3a175331' branch_labels = None depends_on = None def upgrade(): connection = context.get_bind() meta = MetaData() meta.reflect(bind=connection) resource_providers = Table( 'resource_providers', meta, autoload_with=connection, ) query = select( sqlfunc.count(), ).select_from( resource_providers, ).where( resource_providers.c.root_provider_id == sa.null() ) if connection.scalar(query): raise Exception('There is at least one resource provider table ' 'record which is missing its root provider id. ' 'Run the "placement-manage db ' 'online_data_migrations" command.') ././@PaxHeader0000000000000000000000000000022600000000000011455 xustar0000000000000000128 path=openstack_placement-13.0.0/placement/db/sqlalchemy/alembic/versions/a082b8bb98d0_drop_redundant_indexes_for_unique_.py 22 mtime=1743591465.0 openstack_placement-13.0.0/placement/db/sqlalchemy/alembic/versions/a082b8bb98d0_drop_redundant_inde0000664000175000017500000000260600000000000033433 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """Drop redundant indexes for unique constraints Revision ID: a082b8bb98d0 Revises: 422ece571366 Create Date: 2022-09-09 15:52:21.644040 """ from alembic import op # revision identifiers, used by Alembic. revision = 'a082b8bb98d0' down_revision = '422ece571366' branch_labels = None depends_on = None def upgrade(): op.drop_index('inventories_resource_provider_id_idx', table_name='inventories') op.drop_index('inventories_resource_provider_resource_class_idx', table_name='inventories') op.drop_index('ix_placement_aggregates_uuid', table_name='placement_aggregates') op.drop_index('resource_providers_name_idx', table_name='resource_providers') op.drop_index('resource_providers_uuid_idx', table_name='resource_providers') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/db/sqlalchemy/alembic/versions/b4ed3a175331_initial.py0000664000175000017500000002172000000000000031413 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Initial Revision ID: b4ed3a175331 Revises: Create Date: 2018-10-19 18:27:55.950383 """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. 
revision = 'b4ed3a175331' down_revision = None branch_labels = None depends_on = None def upgrade(): op.create_table( 'allocations', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), nullable=False), sa.Column('resource_provider_id', sa.Integer(), nullable=False), sa.Column('consumer_id', sa.String(length=36), nullable=False), sa.Column('resource_class_id', sa.Integer(), nullable=False), sa.Column('used', sa.Integer(), nullable=False), sa.PrimaryKeyConstraint('id') ) op.create_index( 'allocations_resource_provider_class_used_idx', 'allocations', ['resource_provider_id', 'resource_class_id', 'used'], unique=False) op.create_index( 'allocations_resource_class_id_idx', 'allocations', ['resource_class_id'], unique=False) op.create_index( 'allocations_consumer_id_idx', 'allocations', ['consumer_id'], unique=False) op.create_table( 'consumers', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), autoincrement=True, nullable=False), sa.Column('uuid', sa.String(length=36), nullable=False), sa.Column('project_id', sa.Integer(), nullable=False), sa.Column('user_id', sa.Integer(), nullable=False), sa.Column('generation', sa.Integer(), server_default=sa.text('0'), nullable=False), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('uuid', name='uniq_consumers0uuid'), ) op.create_index( 'consumers_project_id_user_id_uuid_idx', 'consumers', ['project_id', 'user_id', 'uuid'], unique=False) op.create_index( 'consumers_project_id_uuid_idx', 'consumers', ['project_id', 'uuid'], unique=False) op.create_table( 'inventories', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), nullable=False), sa.Column('resource_provider_id', sa.Integer(), nullable=False), sa.Column('resource_class_id', sa.Integer(), nullable=False), sa.Column('total', sa.Integer(), nullable=False), sa.Column('reserved', sa.Integer(), nullable=False), sa.Column('min_unit', sa.Integer(), nullable=False), sa.Column('max_unit', sa.Integer(), nullable=False), sa.Column('step_size', sa.Integer(), nullable=False), sa.Column('allocation_ratio', sa.Float(), nullable=False), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint( 'resource_provider_id', 'resource_class_id', name='uniq_inventories0resource_provider_resource_class'), ) op.create_index( 'inventories_resource_class_id_idx', 'inventories', ['resource_class_id'], unique=False) op.create_index( 'inventories_resource_provider_id_idx', 'inventories', ['resource_provider_id'], unique=False) op.create_index( 'inventories_resource_provider_resource_class_idx', 'inventories', ['resource_provider_id', 'resource_class_id'], unique=False) op.create_table( 'placement_aggregates', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), autoincrement=True, nullable=False), sa.Column('uuid', sa.String(length=36), nullable=True), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('uuid', name='uniq_placement_aggregates0uuid') ) op.create_index(op.f('ix_placement_aggregates_uuid'), 'placement_aggregates', ['uuid'], unique=False) op.create_table( 'projects', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), autoincrement=True, nullable=False), sa.Column('external_id', 
sa.String(length=255), nullable=False), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('external_id', name='uniq_projects0external_id'), ) op.create_table( 'resource_classes', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), nullable=False), sa.Column('name', sa.String(length=255), nullable=False), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('name', name='uniq_resource_classes0name'), ) op.create_table( 'resource_provider_aggregates', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('resource_provider_id', sa.Integer(), nullable=False), sa.Column('aggregate_id', sa.Integer(), nullable=False), sa.PrimaryKeyConstraint('resource_provider_id', 'aggregate_id'), ) op.create_index( 'resource_provider_aggregates_aggregate_id_idx', 'resource_provider_aggregates', ['aggregate_id'], unique=False) op.create_table( 'resource_providers', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), nullable=False), sa.Column('uuid', sa.String(length=36), nullable=False), sa.Column('name', sa.Unicode(length=200), nullable=True), sa.Column('generation', sa.Integer(), nullable=True), sa.Column('root_provider_id', sa.Integer(), nullable=True), sa.Column('parent_provider_id', sa.Integer(), nullable=True), sa.ForeignKeyConstraint(['parent_provider_id'], ['resource_providers.id']), sa.ForeignKeyConstraint(['root_provider_id'], ['resource_providers.id']), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('name', name='uniq_resource_providers0name'), sa.UniqueConstraint('uuid', name='uniq_resource_providers0uuid'), ) op.create_index( 'resource_providers_name_idx', 'resource_providers', ['name'], unique=False) op.create_index( 'resource_providers_parent_provider_id_idx', 'resource_providers', ['parent_provider_id'], unique=False) op.create_index( 'resource_providers_root_provider_id_idx', 'resource_providers', ['root_provider_id'], unique=False) op.create_index( 'resource_providers_uuid_idx', 'resource_providers', ['uuid'], unique=False) op.create_table( 'traits', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), autoincrement=True, nullable=False), sa.Column('name', sa.Unicode(length=255), nullable=False), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('name', name='uniq_traits0name'), ) op.create_table( 'users', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.Integer(), autoincrement=True, nullable=False), sa.Column('external_id', sa.String(length=255), nullable=False), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('external_id', name='uniq_users0external_id'), ) op.create_table( 'resource_provider_traits', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('trait_id', sa.Integer(), nullable=False), sa.Column('resource_provider_id', sa.Integer(), nullable=False), sa.ForeignKeyConstraint(['resource_provider_id'], ['resource_providers.id'], ), sa.ForeignKeyConstraint(['trait_id'], ['traits.id'], ), sa.PrimaryKeyConstraint('trait_id', 'resource_provider_id'), ) op.create_index( 'resource_provider_traits_resource_provider_trait_idx', 'resource_provider_traits', ['resource_provider_id', 'trait_id'], unique=False) 
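# NOTE: later revisions in this directory build on this schema in a linear
# chain: 611cd6dffd7b (block on NULL root_provider_id) -> b5c396305c25
# (block on NULL consumer records) -> 422ece571366 (add consumer_types and
# consumers.consumer_type_id) -> a082b8bb98d0, which drops several of the
# indexes created above (inventories_resource_provider_id_idx,
# inventories_resource_provider_resource_class_idx,
# ix_placement_aggregates_uuid, resource_providers_name_idx and
# resource_providers_uuid_idx) as redundant with their unique constraints.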
././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=openstack_placement-13.0.0/placement/db/sqlalchemy/alembic/versions/b5c396305c25_block_on_null_consumer.py 22 mtime=1743591465.0 openstack_placement-13.0.0/placement/db/sqlalchemy/alembic/versions/b5c396305c25_block_on_null_consu0000664000175000017500000000323300000000000033310 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Block on null consumer Revision ID: b5c396305c25 Revises: 611cd6dffd7b Create Date: 2019-06-11 16:30:04.114287 """ from alembic import context import sqlalchemy as sa from sqlalchemy import func as sqlfunc # revision identifiers, used by Alembic. revision = 'b5c396305c25' down_revision = '611cd6dffd7b' branch_labels = None depends_on = None def upgrade(): connection = context.get_bind() meta = sa.MetaData() meta.reflect(bind=connection) consumers = sa.Table('consumers', meta, autoload_with=connection) allocations = sa.Table('allocations', meta, autoload_with=connection) alloc_to_consumer = sa.outerjoin( allocations, consumers, allocations.c.consumer_id == consumers.c.uuid, ) sel = sa.select(sqlfunc.count()) sel = sel.select_from(alloc_to_consumer) sel = sel.where(consumers.c.id.is_(None)) if connection.scalar(sel): raise Exception('There is at least one allocation record which is ' 'missing a consumer record. Run the "placement-manage ' 'db online_data_migrations" command.') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/db/sqlalchemy/alembic.ini0000664000175000017500000000016700000000000024435 0ustar00zuulzuul00000000000000# A generic, single database configuration. [alembic] # path to migration scripts script_location = %(here)s/alembic ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/db/sqlalchemy/migration.py0000664000175000017500000000425500000000000024705 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
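# NOTE: a short sketch of how these helpers are typically driven (see
# placement/cmd/manage.py): 'placement-manage db sync' calls upgrade('head'),
# 'placement-manage db version' prints version(), and
# 'placement-manage db stamp <revision>' calls stamp(). Programmatically,
# assuming db_api.configure() has already been called with a loaded config:
#
#     from placement.db.sqlalchemy import migration
#     migration.upgrade('head')    # apply all alembic revisions
#     print(migration.version())   # report the current revision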
import os import alembic from alembic import config as alembic_config from alembic import migration as alembic_migration from placement.db.sqlalchemy import models from placement import db_api as placement_db def get_engine(): return placement_db.get_placement_engine() def _alembic_config(): path = os.path.join(os.path.dirname(__file__), "alembic.ini") config = alembic_config.Config(path) return config def create_schema(engine=None): """Create schema from models, without a migration.""" base = models.BASE if engine is None: engine = get_engine() base.metadata.create_all(engine) def version(config=None, engine=None): """Current database version. :returns: Database version :rtype: string """ if engine is None: engine = get_engine() with engine.connect() as conn: context = alembic_migration.MigrationContext.configure(conn) return context.get_current_revision() def upgrade(revision, config=None): """Used for upgrading database. :param version: Desired database version :type version: string """ revision = revision or "head" config = config or _alembic_config() alembic.command.upgrade(config, revision) def stamp(version, config=None): """Used for stamp the database version. :param version: Database version to stamp :type version: string """ config = config or _alembic_config() alembic.command.stamp(config, version) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/db/sqlalchemy/models.py0000664000175000017500000001774700000000000024211 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
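# NOTE: a minimal query sketch (mirroring placement/cmd/status.py), assuming
# the database has been configured via placement.db_api.configure(); the
# helper name below is only an example:
#
#     from placement import db_api
#
#     @db_api.placement_context_manager.reader
#     def _count_providers(ctxt):
#         return ctxt.session.query(ResourceProvider).count()
#
# The schema these models describe is created and evolved by the alembic
# revisions under placement/db/sqlalchemy/alembic/versions (or, for fresh
# test databases, by migration.create_schema()), not by this module.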
from oslo_db.sqlalchemy import models from oslo_log import log as logging from sqlalchemy import Column from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import Float from sqlalchemy import ForeignKey from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy import orm from sqlalchemy import schema from sqlalchemy import String from sqlalchemy import Unicode LOG = logging.getLogger(__name__) class _Base(models.ModelBase, models.TimestampMixin): pass BASE = declarative_base(cls=_Base) class ResourceClass(BASE): """Represents the type of resource for an inventory or allocation.""" __tablename__ = 'resource_classes' __table_args__ = ( schema.UniqueConstraint("name", name="uniq_resource_classes0name"), ) id = Column(Integer, primary_key=True, nullable=False) name = Column(String(255), nullable=False) class ResourceProvider(BASE): """Represents a mapping to a providers of resources.""" __tablename__ = "resource_providers" __table_args__ = ( schema.UniqueConstraint('uuid', name='uniq_resource_providers0uuid'), Index('resource_providers_root_provider_id_idx', 'root_provider_id'), Index('resource_providers_parent_provider_id_idx', 'parent_provider_id'), schema.UniqueConstraint('name', name='uniq_resource_providers0name') ) id = Column(Integer, primary_key=True, nullable=False) uuid = Column(String(36), nullable=False) name = Column(Unicode(200), nullable=True) generation = Column(Integer, default=0) # Represents the root of the "tree" that the provider belongs to root_provider_id = Column( Integer, ForeignKey('resource_providers.id'), nullable=True) # The immediate parent provider of this provider, or NULL if there is no # parent. If parent_provider_id == NULL then root_provider_id == id parent_provider_id = Column( Integer, ForeignKey('resource_providers.id'), nullable=True) class Inventory(BASE): """Represents a quantity of available resource.""" __tablename__ = "inventories" __table_args__ = ( Index('inventories_resource_class_id_idx', 'resource_class_id'), schema.UniqueConstraint( 'resource_provider_id', 'resource_class_id', name='uniq_inventories0resource_provider_resource_class') ) id = Column(Integer, primary_key=True, nullable=False) resource_provider_id = Column(Integer, nullable=False) resource_class_id = Column(Integer, nullable=False) total = Column(Integer, nullable=False) reserved = Column(Integer, nullable=False) min_unit = Column(Integer, nullable=False) max_unit = Column(Integer, nullable=False) step_size = Column(Integer, nullable=False) allocation_ratio = Column(Float, nullable=False) resource_provider = orm.relationship( "ResourceProvider", primaryjoin=('Inventory.resource_provider_id == ' 'ResourceProvider.id'), foreign_keys=resource_provider_id) class Allocation(BASE): """A use of inventory.""" __tablename__ = "allocations" __table_args__ = ( Index('allocations_resource_provider_class_used_idx', 'resource_provider_id', 'resource_class_id', 'used'), Index('allocations_resource_class_id_idx', 'resource_class_id'), Index('allocations_consumer_id_idx', 'consumer_id') ) id = Column(Integer, primary_key=True, nullable=False) resource_provider_id = Column(Integer, nullable=False) consumer_id = Column(String(36), nullable=False) resource_class_id = Column(Integer, nullable=False) used = Column(Integer, nullable=False) resource_provider = orm.relationship( "ResourceProvider", primaryjoin=('Allocation.resource_provider_id == ' 'ResourceProvider.id'), foreign_keys=resource_provider_id) class ResourceProviderAggregate(BASE): """Associate a resource 
provider with an aggregate.""" __tablename__ = 'resource_provider_aggregates' __table_args__ = ( Index('resource_provider_aggregates_aggregate_id_idx', 'aggregate_id'), ) resource_provider_id = Column(Integer, primary_key=True, nullable=False) aggregate_id = Column(Integer, primary_key=True, nullable=False) class PlacementAggregate(BASE): """A grouping of resource providers.""" __tablename__ = 'placement_aggregates' __table_args__ = ( schema.UniqueConstraint("uuid", name="uniq_placement_aggregates0uuid"), ) id = Column(Integer, primary_key=True, autoincrement=True) uuid = Column(String(36)) class Trait(BASE): """Represents a trait.""" __tablename__ = "traits" __table_args__ = ( schema.UniqueConstraint('name', name='uniq_traits0name'), ) id = Column(Integer, primary_key=True, nullable=False, autoincrement=True) name = Column(Unicode(255), nullable=False) class ResourceProviderTrait(BASE): """Represents the relationship between traits and resource provider""" __tablename__ = "resource_provider_traits" __table_args__ = ( Index('resource_provider_traits_resource_provider_trait_idx', 'resource_provider_id', 'trait_id'), ) trait_id = Column(Integer, ForeignKey('traits.id'), primary_key=True, nullable=False) resource_provider_id = Column(Integer, ForeignKey('resource_providers.id'), primary_key=True, nullable=False) class Project(BASE): """The project is the Keystone project.""" __tablename__ = 'projects' __table_args__ = ( schema.UniqueConstraint( 'external_id', name='uniq_projects0external_id', ), ) id = Column(Integer, primary_key=True, nullable=False, autoincrement=True) external_id = Column(String(255), nullable=False) class User(BASE): """The user is the Keystone user.""" __tablename__ = 'users' __table_args__ = ( schema.UniqueConstraint( 'external_id', name='uniq_users0external_id', ), ) id = Column(Integer, primary_key=True, nullable=False, autoincrement=True) external_id = Column(String(255), nullable=False) class Consumer(BASE): """Represents a resource consumer.""" __tablename__ = 'consumers' __table_args__ = ( Index('consumers_project_id_uuid_idx', 'project_id', 'uuid'), Index('consumers_project_id_user_id_uuid_idx', 'project_id', 'user_id', 'uuid'), Index('consumers_consumer_type_id_idx', 'consumer_type_id'), schema.UniqueConstraint('uuid', name='uniq_consumers0uuid'), ) id = Column(Integer, primary_key=True, nullable=False, autoincrement=True) uuid = Column(String(36), nullable=False) project_id = Column(Integer, nullable=False) user_id = Column(Integer, nullable=False) generation = Column(Integer, nullable=False, server_default="0", default=0) consumer_type_id = Column( Integer, ForeignKey('consumer_types.id'), nullable=True) class ConsumerType(BASE): """Represents a consumer's type.""" __tablename__ = 'consumer_types' __table_args__ = ( schema.UniqueConstraint('name', name='uniq_consumer_types0name'), ) id = Column(Integer, primary_key=True, nullable=False, autoincrement=True) name = Column(Unicode(255), nullable=False) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/db_api.py0000664000175000017500000000310500000000000021374 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Database context manager for placement database connection.""" from oslo_db.sqlalchemy import enginefacade from oslo_log import log as logging from placement.util import run_once LOG = logging.getLogger(__name__) placement_context_manager = enginefacade.transaction_context() def _get_db_conf(conf_group): conf_dict = dict(conf_group.items()) # Remove the 'sync_on_startup' conf setting, enginefacade does not use it. # Use pop since it might not be present in testing situations and we # don't want to care here. conf_dict.pop('sync_on_startup', None) return conf_dict @run_once("TransactionFactory already started, not reconfiguring.", LOG.warning) def configure(conf): placement_context_manager.configure( **_get_db_conf(conf.placement_database)) def get_placement_engine(): return placement_context_manager.writer.get_engine() @enginefacade.transaction_context_provider class DbContext(object): """Stub class for db session handling outside of web requests.""" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/deploy.py0000664000175000017500000001473000000000000021460 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Deployment handling for Placement API.""" import os from microversion_parse import middleware as mp_middleware import oslo_middleware from oslo_utils import importutils from placement import auth from placement.db.sqlalchemy import migration from placement import db_api from placement import fault_wrap from placement import handler from placement import microversion from placement.objects import resource_class from placement.objects import trait from placement import policy from placement import requestlog from placement import util os_profiler = importutils.try_import('osprofiler.profiler') os_profiler_web = importutils.try_import('osprofiler.web') PROFILER_OUTPUT = os.environ.get('OS_WSGI_PROFILER') if PROFILER_OUTPUT: # If werkzeug is not available this raises ImportError and the # process will not continue. This is intentional: we do not want # to make a permanent dependency on werkzeug. from werkzeug.contrib import profiler def deploy(conf): """Assemble the middleware pipeline leading to the placement app.""" if conf.api.auth_strategy == 'noauth2': auth_middleware = auth.NoAuthMiddleware else: # Do not use 'oslo_config_project' param here as the conf # location may have been overridden earlier in the deployment # process with OS_PLACEMENT_CONFIG_DIR in wsgi.py. 
auth_middleware = auth.filter_factory( {}, oslo_config_config=conf) # Conditionally add CORS middleware based on setting 'allowed_origin' # in config. if conf.cors.allowed_origin: cors_middleware = oslo_middleware.CORS.factory( {}, **conf.cors) else: cors_middleware = None context_middleware = auth.PlacementKeystoneContext microversion_middleware = mp_middleware.MicroversionMiddleware fault_middleware = fault_wrap.FaultWrapper request_log = requestlog.RequestLog http_proxy_to_wsgi = oslo_middleware.HTTPProxyToWSGI if os_profiler_web and 'profiler' in conf and conf.profiler.enabled: osprofiler_middleware = os_profiler_web.WsgiMiddleware.factory( {}, **conf.profiler) else: osprofiler_middleware = None application = handler.PlacementHandler(config=conf) # If PROFILER_OUTPUT is set, generate per request profile reports # to the directory named therein. if PROFILER_OUTPUT: application = profiler.ProfilerMiddleware( application, profile_dir=PROFILER_OUTPUT) # configure microversion middleware in the old school way application = microversion_middleware( application, microversion.SERVICE_TYPE, microversion.VERSIONS, json_error_formatter=util.json_error_formatter) # NOTE(cdent): The ordering here is important. The list is ordered from the # inside out. For a single request, http_proxy_to_wsgi is called first to # identify the source address and then request_log is called (to extract # request context information and log the start of the request). If # osprofiler_middleware is present (see above), it is first. # fault_middleware is last in the stack described below, to wrap unexpected # exceptions in the placement application as valid HTTP 500 responses. Then # the request is passed to the microversion middleware (configured above) # and then finally to the application (the PlacementHandler, further # above). At that point the response ascends the middleware in the reverse # of the order the request went in. This order ensures that log messages # all see the same contextual information including request id and # authentication information. An individual piece of middleware is a # wrapper around the next and can do work on the way in, the way out, or # both. Which can be determined by looking at the `__call__` method in the # middleware. "In" activity is done prior to calling the next layer in the # stack (often `self.application`). "Out" activity is after, or in a # redefinition of the `start_response` method, commonly called # `replacement_start_response`. for middleware in (fault_middleware, context_middleware, auth_middleware, cors_middleware, request_log, http_proxy_to_wsgi, osprofiler_middleware, ): if middleware: application = middleware(application) # NOTE(mriedem): Ignore scope check UserWarnings from oslo.policy. if not conf.oslo_policy.enforce_scope: import warnings warnings.filterwarnings('ignore', message="Policy .* failed scope check", category=UserWarning) return application def update_database(conf): """Do any database updates required at process boot time, such as updating the traits table. """ if conf.placement_database.sync_on_startup: migration.upgrade('head') ctx = db_api.DbContext() trait.ensure_sync(ctx) resource_class.ensure_sync(ctx) # NOTE(cdent): Although project_name is no longer used because of the # resolution of https://bugs.launchpad.net/nova/+bug/1734491, loadapp() # is considered a public interface for the creation of a placement # WSGI app so must maintain its interface. 
The canonical placement WSGI # app is created by init_application in wsgi.py, but this is not # required and in fact can be limiting. loadapp() may be used from # fixtures or arbitrary WSGI frameworks and loaders. def loadapp(config, project_name=None): """WSGI application creator for placement. :param config: An oslo_config.cfg.ConfigOpts containing placement configuration. :param project_name: oslo_config project name. Ignored, preserved for backwards compatibility """ application = deploy(config) policy.init(config) update_database(config) return application ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/direct.py0000664000175000017500000000766200000000000021444 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Call any URI in the placement service directly without real HTTP. This is useful for those cases where processes wish to manipulate the Placement datastore but do not want to run Placement as a long running service. A PlacementDirect context manager is provided. Within that HTTP requests may be made as normal but they will not actually traverse a real socket. """ from unittest import mock from keystoneauth1 import adapter from keystoneauth1 import session from oslo_utils import uuidutils import requests from wsgi_intercept import interceptor from placement import deploy class PlacementDirect(interceptor.RequestsInterceptor): """Provide access to the placement service without real HTTP. wsgi-intercept is used to provide a keystoneauth1 Adapter that has access to an in-process placement service. This provides access to making changes to the placement database without requiring HTTP over the network - it remains in-process. Authentication to the service is turned off; admin access is assumed. Access is provided via a context manager which is responsible for turning the wsgi-intercept on and off, and setting and removing mocks required to keystoneauth1 to work around endpoint discovery. Example:: with PlacementDirect(cfg.CONF, latest_microversion=True) as client: allocations = client.get('/allocations/%s' % consumer) :param conf: An oslo config with the options used to configure the placement service (notably database connection string). :param latest_microversion: If True, API requests will use the latest microversion if not otherwise specified. If False (the default), the base microversion is the default. """ def __init__(self, conf, latest_microversion=False): conf.set_override('auth_strategy', 'noauth2', group='api') def app(): return deploy.loadapp(conf) self.url = 'http://%s/placement' % str(uuidutils.generate_uuid()) # Supply our own session so the wsgi-intercept can intercept # the right thing. 
request_session = requests.Session() headers = { 'x-auth-token': 'admin', } # TODO(efried): See below if latest_microversion: headers['OpenStack-API-Version'] = 'placement latest' self.adapter = adapter.Adapter( session.Session(auth=None, session=request_session, additional_headers=headers), service_type='placement', raise_exc=False) # TODO(efried): Figure out why this isn't working: # default_microversion='latest' if latest_microversion else None) self._mocked_endpoint = mock.patch( 'keystoneauth1.session.Session.get_endpoint', new=mock.Mock(return_value=self.url)) super(PlacementDirect, self).__init__(app, url=self.url) def __enter__(self): """Start the wsgi-intercept interceptor and keystone endpoint mock. A no auth ksa Adapter is provided to the context being managed. """ super(PlacementDirect, self).__enter__() self._mocked_endpoint.start() return self.adapter def __exit__(self, *exc): self._mocked_endpoint.stop() return super(PlacementDirect, self).__exit__(*exc) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/errors.py0000664000175000017500000000465300000000000021503 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Error code symbols to be used in structured JSON error responses. These are strings to be used in the 'code' attribute, as described by the API guideline on `errors`_. There must be only one instance of any string value and it should have only one associated constant SYMBOL. In a WSGI handler (representing the sole handler for an HTTP method and URI) each error condition should get a separate error code. Reusing an error code in a different handler is not just acceptable, but useful. For example 'placement.inventory.inuse' is meaningful and correct in both ``PUT /resource_providers/{uuid}/inventories`` and ``DELETE`` on the same URI. .. _errors: http://specs.openstack.org/openstack/api-wg/guidelines/errors.html """ # NOTE(cdent): This is the simplest thing that can possibly work, for now. # If it turns out we want to automate this, or put different resources in # different files, or otherwise change things, that's fine. The only thing # that needs to be maintained as the same are the strings that API end # users use. How they are created is completely fungible. # Do not change the string values. Once set, they are set. # Do not reuse string values. There should be only one symbol for any # value. # Don't forget to document new error codes in api-ref/source/errors.inc. 
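# Illustrative note (not part of the original module): handlers attach these
# codes to structured error responses through the ``comment`` keyword of the
# webob exceptions they raise, for example (as in the aggregate and
# allocation handlers further below):
#
#     raise webob.exc.HTTPConflict(
#         'Update conflict: %(error)s' % {'error': exc},
#         comment=errors.CONCURRENT_UPDATE)
#
# so that the value can be surfaced as the 'code' attribute described above.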
DEFAULT = 'placement.undefined_code' INVENTORY_INUSE = 'placement.inventory.inuse' CONCURRENT_UPDATE = 'placement.concurrent_update' DUPLICATE_NAME = 'placement.duplicate_name' PROVIDER_IN_USE = 'placement.resource_provider.inuse' PROVIDER_CANNOT_DELETE_PARENT = ( 'placement.resource_provider.cannot_delete_parent') RESOURCE_PROVIDER_NOT_FOUND = 'placement.resource_provider.not_found' ILLEGAL_DUPLICATE_QUERYPARAM = 'placement.query.duplicate_key' # Failure of a post-schema value check QUERYPARAM_BAD_VALUE = 'placement.query.bad_value' QUERYPARAM_MISSING_VALUE = 'placement.query.missing_value' ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/exception.py0000664000175000017500000001515200000000000022161 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Exceptions for use in the Placement API.""" from oslo_log import log as logging LOG = logging.getLogger(__name__) class _BaseException(Exception): """Base Exception To correctly use this class, inherit from it and define a 'msg_fmt' property. That msg_fmt will get printf'd with the keyword arguments provided to the constructor. """ msg_fmt = "An unknown exception occurred." def __init__(self, message=None, **kwargs): self.kwargs = kwargs if not message: try: message = self.msg_fmt % kwargs except Exception: # NOTE(melwitt): This is done in a separate method so it can be # monkey-patched during testing to make it a hard failure. self._log_exception() message = self.msg_fmt self.message = message super(_BaseException, self).__init__(message) def _log_exception(self): # kwargs doesn't match a variable in the message # log the issue and the kwargs LOG.exception('Exception in string format operation') for name, value in self.kwargs.items(): LOG.error("%s: %s" % (name, value)) # noqa def format_message(self): # Use the first argument to the python Exception object which # should be our full exception message, (see __init__). return self.args[0] class NotFound(_BaseException): msg_fmt = "Resource could not be found." class Exists(_BaseException): msg_fmt = "Resource already exists." class InvalidInventory(_BaseException): msg_fmt = ("Inventory for '%(resource_class)s' on " "resource provider '%(resource_provider)s' invalid.") class CannotDeleteParentResourceProvider(_BaseException): msg_fmt = ("Cannot delete resource provider that is a parent of " "another. Delete child providers first.") class ConcurrentUpdateDetected(_BaseException): msg_fmt = ("Another thread concurrently updated the data. " "Please retry your update") class ResourceProviderConcurrentUpdateDetected(ConcurrentUpdateDetected): msg_fmt = ("Another thread concurrently updated the resource provider " "data. Please retry your update") class ResourceProviderNotFound(NotFound): # Marker exception indicating that we've filtered down to zero possible # allocation candidates. 
Does not represent an API error; should only be # used internally: no results is a 200 with empty allocation_requests. msg_fmt = "No results are possible." class InvalidAllocationCapacityExceeded(InvalidInventory): msg_fmt = ("Unable to create allocation for '%(resource_class)s' on " "resource provider '%(resource_provider)s'. The requested " "amount would exceed the capacity.") class InvalidAllocationConstraintsViolated(InvalidInventory): msg_fmt = ("Unable to create allocation for '%(resource_class)s' on " "resource provider '%(resource_provider)s'. The requested " "amount would violate inventory constraints.") class InvalidInventoryCapacity(InvalidInventory): msg_fmt = ("Invalid inventory for '%(resource_class)s' on " "resource provider '%(resource_provider)s'. " "The reserved value is greater than or equal to total.") class InvalidInventoryCapacityReservedCanBeTotal(InvalidInventoryCapacity): msg_fmt = ("Invalid inventory for '%(resource_class)s' on " "resource provider '%(resource_provider)s'. " "The reserved value is greater than total.") # An exception with this name is used on both sides of the placement/ # nova interaction. class InventoryInUse(InvalidInventory): msg_fmt = ("Inventory for '%(resource_classes)s' on " "resource provider '%(resource_provider)s' in use.") class InventoryWithResourceClassNotFound(NotFound): msg_fmt = "No inventory of class %(resource_class)s found." class MaxDBRetriesExceeded(_BaseException): msg_fmt = ("Max retries of DB transaction exceeded attempting to " "perform %(action)s.") class ObjectActionError(_BaseException): msg_fmt = 'Object action %(action)s failed because: %(reason)s' class PolicyNotAuthorized(_BaseException): msg_fmt = "Policy does not allow %(action)s to be performed." class ResourceClassCannotDeleteStandard(_BaseException): msg_fmt = "Cannot delete standard resource class %(resource_class)s." class ResourceClassCannotUpdateStandard(_BaseException): msg_fmt = "Cannot update standard resource class %(resource_class)s." class ResourceClassExists(_BaseException): msg_fmt = "Resource class %(resource_class)s already exists." class ResourceClassInUse(_BaseException): msg_fmt = ("Cannot delete resource class %(resource_class)s. " "Class is in use in inventory.") class ResourceClassNotFound(NotFound): msg_fmt = "No such resource class %(name)s." class ResourceProviderInUse(_BaseException): msg_fmt = "Resource provider has allocations." class TraitCannotDeleteStandard(_BaseException): msg_fmt = "Cannot delete standard trait %(name)s." class TraitExists(_BaseException): msg_fmt = "The Trait %(name)s already exists" class TraitInUse(_BaseException): msg_fmt = "The trait %(name)s is in use by a resource provider." class TraitNotFound(NotFound): msg_fmt = "No such trait(s): %(name)s." class ProjectNotFound(NotFound): msg_fmt = "No such project(s): %(external_id)s." class ProjectExists(Exists): msg_fmt = "The project %(external_id)s already exists." class UserNotFound(NotFound): msg_fmt = "No such user(s): %(external_id)s." class UserExists(Exists): msg_fmt = "The user %(external_id)s already exists." class ConsumerNotFound(NotFound): msg_fmt = "No such consumer(s): %(uuid)s." class ConsumerExists(Exists): msg_fmt = "The consumer %(uuid)s already exists." class ConsumerTypeNotFound(NotFound): msg_fmt = "No such consumer type: %(name)s." class ConsumerTypeExists(Exists): msg_fmt = "The consumer type %(name)s already exists." 
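The msg_fmt mechanism described in the _BaseException docstring above is easiest to see with a small example. The following sketch is illustrative only and not part of the distribution; the class and keyword come from placement/exception.py itself, while the surrounding script is an assumption about how a caller might use it.

# Illustrative only: exercises the msg_fmt interpolation implemented by
# _BaseException in placement/exception.py above.
from placement import exception

try:
    # ResourceClassNotFound.msg_fmt is "No such resource class %(name)s."
    raise exception.ResourceClassNotFound(name='CUSTOM_GOLD')
except exception.NotFound as exc:
    # format_message() returns the interpolated message:
    # "No such resource class CUSTOM_GOLD."
    print(exc.format_message())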
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/fault_wrap.py0000664000175000017500000000330500000000000022324 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Simple middleware for safely catching unexpected exceptions.""" # NOTE(cdent): This is a super simplified replacement for the nova # FaultWrapper, which does more than placement needs. from oslo_log import log as logging from webob import exc from placement import util LOG = logging.getLogger(__name__) class FaultWrapper(object): """Turn an uncaught exception into a status 500. Uncaught exceptions usually shouldn't happen, if it does it means there is a bug in the placement service, which should be fixed. """ def __init__(self, application): self.application = application def __call__(self, environ, start_response): try: return self.application(environ, start_response) except Exception as unexpected_exception: LOG.exception('Placement API unexpected error: %s', unexpected_exception) formatted_exception = exc.HTTPInternalServerError( str(unexpected_exception)) formatted_exception.json_formatter = util.json_error_formatter return formatted_exception.generate_response( environ, start_response) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/handler.py0000664000175000017500000002203300000000000021574 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Handlers for placement API. Individual handlers are associated with URL paths in the ROUTE_DECLARATIONS dictionary. At the top level each key is a Routes compliant path. The value of that key is a dictionary mapping individual HTTP request methods to a Python function representing a simple WSGI application for satisfying that request. The ``make_map`` method processes ROUTE_DECLARATIONS to create a Routes.Mapper, including automatic handlers to respond with a 405 when a request is made against a valid URL with an invalid method. 
""" import routes import webob from oslo_log import log as logging from placement import exception from placement.handlers import aggregate from placement.handlers import allocation from placement.handlers import allocation_candidate from placement.handlers import inventory from placement.handlers import reshaper from placement.handlers import resource_class from placement.handlers import resource_provider from placement.handlers import root from placement.handlers import trait from placement.handlers import usage from placement import util LOG = logging.getLogger(__name__) # URLs and Handlers # NOTE(cdent): When adding URLs here, do not use regex patterns in # the path parameters (e.g. {uuid:[0-9a-zA-Z-]+}) as that will lead # to 404s that are controlled outside of the individual resources # and thus do not include specific information on the why of the 404. ROUTE_DECLARATIONS = { '/': { 'GET': root.home, }, # NOTE(cdent): This allows '/placement/' and '/placement' to # both work as the root of the service, which we probably want # for those situations where the service is mounted under a # prefix (as it is in devstack). While weird, an empty string is # a legit key in a dictionary and matches as desired in Routes. '': { 'GET': root.home, }, '/resource_classes': { 'GET': resource_class.list_resource_classes, 'POST': resource_class.create_resource_class }, '/resource_classes/{name}': { 'GET': resource_class.get_resource_class, 'PUT': resource_class.update_resource_class, 'DELETE': resource_class.delete_resource_class, }, '/resource_providers': { 'GET': resource_provider.list_resource_providers, 'POST': resource_provider.create_resource_provider }, '/resource_providers/{uuid}': { 'GET': resource_provider.get_resource_provider, 'DELETE': resource_provider.delete_resource_provider, 'PUT': resource_provider.update_resource_provider }, '/resource_providers/{uuid}/inventories': { 'GET': inventory.get_inventories, 'POST': inventory.create_inventory, 'PUT': inventory.set_inventories, 'DELETE': inventory.delete_inventories }, '/resource_providers/{uuid}/inventories/{resource_class}': { 'GET': inventory.get_inventory, 'PUT': inventory.update_inventory, 'DELETE': inventory.delete_inventory }, '/resource_providers/{uuid}/usages': { 'GET': usage.list_usages }, '/resource_providers/{uuid}/aggregates': { 'GET': aggregate.get_aggregates, 'PUT': aggregate.set_aggregates }, '/resource_providers/{uuid}/allocations': { 'GET': allocation.list_for_resource_provider, }, '/allocations': { 'POST': allocation.set_allocations, }, '/allocations/{consumer_uuid}': { 'GET': allocation.list_for_consumer, 'PUT': allocation.set_allocations_for_consumer, 'DELETE': allocation.delete_allocations, }, '/allocation_candidates': { 'GET': allocation_candidate.list_allocation_candidates, }, '/traits': { 'GET': trait.list_traits, }, '/traits/{name}': { 'GET': trait.get_trait, 'PUT': trait.put_trait, 'DELETE': trait.delete_trait, }, '/resource_providers/{uuid}/traits': { 'GET': trait.list_traits_for_resource_provider, 'PUT': trait.update_traits_for_resource_provider, 'DELETE': trait.delete_traits_for_resource_provider }, '/usages': { 'GET': usage.get_total_usages, }, '/reshaper': { 'POST': reshaper.reshape, }, } def dispatch(environ, start_response, mapper): """Find a matching route for the current request. If no match is found, raise a 404 response. If there is a matching route, but no matching handler for the given method, raise a 405. 
""" result = mapper.match(environ=environ) if result is None: raise webob.exc.HTTPNotFound( json_formatter=util.json_error_formatter) # We can't reach this code without action being present. handler = result.pop('action') environ['wsgiorg.routing_args'] = ((), result) return handler(environ, start_response) def handle_405(environ, start_response): """Return a 405 response when method is not allowed. If _methods are in routing_args, send an allow header listing the methods that are possible on the provided URL. """ _methods = util.wsgi_path_item(environ, '_methods') headers = {} if _methods: # Ensure allow header is a python 2 or 3 native string (thus # not unicode in python 2 but stay a string in python 3) # In the process done by Routes to save the allowed methods # to its routing table they become unicode in py2. headers['allow'] = str(_methods) # Use Exception class as WSGI Application. We don't want to raise here. response = webob.exc.HTTPMethodNotAllowed( 'The method specified is not allowed for this resource.', headers=headers, json_formatter=util.json_error_formatter) return response(environ, start_response) def make_map(declarations): """Process route declarations to create a Route Mapper.""" mapper = routes.Mapper() for route, targets in declarations.items(): allowed_methods = [] for method in targets: mapper.connect(route, action=targets[method], conditions=dict(method=[method])) allowed_methods.append(method) allowed_methods = ', '.join(allowed_methods) mapper.connect(route, action=handle_405, _methods=allowed_methods) return mapper class PlacementHandler(object): """Serve Placement API. Dispatch to handlers defined in ROUTE_DECLARATIONS. """ def __init__(self, **local_config): self._map = make_map(ROUTE_DECLARATIONS) self.config = local_config['config'] def __call__(self, environ, start_response): # set a reference to the oslo.config ConfigOpts on the RequestContext context = environ['placement.context'] context.config = self.config # Check that an incoming request with a content-length header # that is an integer > 0 and not empty, also has a content-type # header that is not empty. If not raise a 400. clen = environ.get('CONTENT_LENGTH') try: if clen and (int(clen) > 0) and not environ.get('CONTENT_TYPE'): raise webob.exc.HTTPBadRequest( 'content-type header required when content-length > 0', json_formatter=util.json_error_formatter) except ValueError: raise webob.exc.HTTPBadRequest( 'content-length header must be an integer', json_formatter=util.json_error_formatter) try: return dispatch(environ, start_response, self._map) # Trap the NotFound exceptions raised by the objects used # with the API and transform them into webob.exc.HTTPNotFound. except exception.NotFound as exc: raise webob.exc.HTTPNotFound( exc, json_formatter=util.json_error_formatter) except exception.PolicyNotAuthorized as exc: raise webob.exc.HTTPForbidden( exc.format_message(), json_formatter=util.json_error_formatter) # Remaining uncaught exceptions will rise first to the Microversion # middleware, where any WebOb generated exceptions will be caught and # transformed into legit HTTP error responses (with microversion # headers added), and then to the FaultWrapper middleware which will # catch anything else and transform them into 500 responses. # NOTE(cdent): There should be very few uncaught exceptions which are # not WebOb exceptions at this stage as the handlers are contained by # the wsgify decorator which will transform those exceptions to # responses itself. 
././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2487779 openstack_placement-13.0.0/placement/handlers/0000775000175000017500000000000000000000000021405 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/handlers/__init__.py0000664000175000017500000000000000000000000023504 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/handlers/aggregate.py0000664000175000017500000001235400000000000023712 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Aggregate handlers for Placement API.""" from oslo_db import exception as db_exc from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import timeutils import webob from placement import errors from placement import exception from placement import microversion from placement.objects import resource_provider as rp_obj from placement.policies import aggregate as policies from placement.schemas import aggregate as schema from placement import util from placement import wsgi_wrapper _INCLUDE_GENERATION_VERSION = (1, 19) def _send_aggregates(req, resource_provider, aggregate_uuids): want_version = req.environ[microversion.MICROVERSION_ENVIRON] response = req.response response.status = 200 payload = _serialize_aggregates(aggregate_uuids) if want_version.matches(min_version=_INCLUDE_GENERATION_VERSION): payload['resource_provider_generation'] = resource_provider.generation response.body = encodeutils.to_utf8( jsonutils.dumps(payload)) response.content_type = 'application/json' if want_version.matches((1, 15)): req.response.cache_control = 'no-cache' # We never get an aggregate itself, we get the list of aggregates # that are associated with a resource provider. We don't record the # time when that association was made and the time when an aggregate # uuid was created is not relevant, so here we punt and use utcnow. req.response.last_modified = timeutils.utcnow(with_timezone=True) return response def _serialize_aggregates(aggregate_uuids): return {'aggregates': aggregate_uuids} def _set_aggregates(resource_provider, aggregate_uuids, increment_generation=False): """Set aggregates for the resource provider. If increment generation is true, the resource provider generation will be incremented if possible. If that fails (because something else incremented the generation in another thread), a ConcurrentUpdateDetected will be raised. """ # NOTE(cdent): It's not clear what the DBDuplicateEntry handling # is doing here, set_aggregates already handles that, but I'm leaving # it here because it was already there. 
try: resource_provider.set_aggregates( aggregate_uuids, increment_generation=increment_generation) except exception.ConcurrentUpdateDetected as exc: raise webob.exc.HTTPConflict( 'Update conflict: %(error)s' % {'error': exc}, comment=errors.CONCURRENT_UPDATE) except db_exc.DBDuplicateEntry as exc: raise webob.exc.HTTPConflict( 'Update conflict: %(error)s' % {'error': exc}) @wsgi_wrapper.PlacementWsgify @util.check_accept('application/json') @microversion.version_handler('1.1') def get_aggregates(req): """GET a list of aggregates associated with a resource provider. If the resource provider does not exist return a 404. On success return a 200 with an application/json body containing a list of aggregate uuids. """ context = req.environ['placement.context'] context.can(policies.LIST) uuid = util.wsgi_path_item(req.environ, 'uuid') resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) aggregate_uuids = resource_provider.get_aggregates() return _send_aggregates(req, resource_provider, aggregate_uuids) @wsgi_wrapper.PlacementWsgify @util.require_content('application/json') @microversion.version_handler('1.1') def set_aggregates(req): context = req.environ['placement.context'] context.can(policies.UPDATE) want_version = req.environ[microversion.MICROVERSION_ENVIRON] consider_generation = want_version.matches( min_version=_INCLUDE_GENERATION_VERSION) put_schema = schema.PUT_AGGREGATES_SCHEMA_V1_1 if consider_generation: put_schema = schema.PUT_AGGREGATES_SCHEMA_V1_19 uuid = util.wsgi_path_item(req.environ, 'uuid') resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) data = util.extract_json(req.body, put_schema) if consider_generation: # Check for generation conflict rp_gen = data['resource_provider_generation'] if resource_provider.generation != rp_gen: raise webob.exc.HTTPConflict( "Resource provider's generation already changed. Please " "update the generation and try again.", comment=errors.CONCURRENT_UPDATE) aggregate_uuids = data['aggregates'] else: aggregate_uuids = data _set_aggregates(resource_provider, aggregate_uuids, increment_generation=consider_generation) return _send_aggregates(req, resource_provider, aggregate_uuids) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/handlers/allocation.py0000664000175000017500000006364300000000000024120 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Placement API handlers for setting and deleting allocations.""" import collections import uuid from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import excutils from oslo_utils import timeutils from oslo_utils import uuidutils import webob from placement import db_api from placement import errors from placement import exception from placement.handlers import util as data_util from placement import microversion from placement.objects import allocation as alloc_obj from placement.objects import resource_provider as rp_obj from placement.policies import allocation as policies from placement.schemas import allocation as schema from placement import util from placement import wsgi_wrapper LOG = logging.getLogger(__name__) def _last_modified_from_allocations(allocations, want_version): """Given a set of allocation objects, returns the last modified timestamp. """ # NOTE(cdent): The last_modified for an allocation will always be # based off the created_at column because allocations are only # ever inserted, never updated. last_modified = None # Only calculate last-modified if we are using a microversion that # supports it. get_last_modified = want_version and want_version.matches((1, 15)) for allocation in allocations: if get_last_modified: last_modified = util.pick_last_modified(last_modified, allocation) last_modified = last_modified or timeutils.utcnow(with_timezone=True) return last_modified def _serialize_allocations_for_consumer(context, allocations, want_version): """Turn a list of allocations into a dict by resource provider uuid. { 'allocations': { RP_UUID_1: { 'generation': GENERATION, 'resources': { 'DISK_GB': 4, 'VCPU': 2 } }, RP_UUID_2: { 'generation': GENERATION, 'resources': { 'DISK_GB': 6, 'VCPU': 3 } } }, # project_id and user_id are added with microverion 1.12 'project_id': PROJECT_ID, 'user_id': USER_ID, # Generation for consumer >= 1.28 'consumer_generation': 1 # Consumer Type for consumer >= 1.38 'consumer_type': INSTANCE } """ allocation_data = collections.defaultdict(dict) for allocation in allocations: key = allocation.resource_provider.uuid if 'resources' not in allocation_data[key]: allocation_data[key]['resources'] = {} resource_class = allocation.resource_class allocation_data[key]['resources'][resource_class] = allocation.used generation = allocation.resource_provider.generation allocation_data[key]['generation'] = generation result = {'allocations': allocation_data} if allocations and want_version.matches((1, 12)): # We're looking at a list of allocations by consumer id so project and # user are consistent across the list consumer = allocations[0].consumer project_id = consumer.project.external_id user_id = consumer.user.external_id result['project_id'] = project_id result['user_id'] = user_id show_consumer_gen = want_version.matches((1, 28)) if show_consumer_gen: result['consumer_generation'] = consumer.generation show_consumer_type = want_version.matches((1, 38)) if show_consumer_type: con_name = context.ct_cache.string_from_id( consumer.consumer_type_id) result['consumer_type'] = con_name return result def _serialize_allocations_for_resource_provider(allocations, resource_provider, want_version): """Turn a list of allocations into a dict by consumer id. 
{'resource_provider_generation': GENERATION, 'allocations': CONSUMER_ID_1: { 'resources': { 'DISK_GB': 4, 'VCPU': 2 }, # Generation for consumer >= 1.28 'consumer_generation': 0 }, CONSUMER_ID_2: { 'resources': { 'DISK_GB': 6, 'VCPU': 3 }, # Generation for consumer >= 1.28 'consumer_generation': 0 } } """ show_consumer_gen = want_version.matches((1, 28)) allocation_data = collections.defaultdict(dict) for allocation in allocations: key = allocation.consumer.uuid if 'resources' not in allocation_data[key]: allocation_data[key]['resources'] = {} resource_class = allocation.resource_class allocation_data[key]['resources'][resource_class] = allocation.used if show_consumer_gen: consumer_gen = None if allocation.consumer is not None: consumer_gen = allocation.consumer.generation allocation_data[key]['consumer_generation'] = consumer_gen result = {'allocations': allocation_data} result['resource_provider_generation'] = resource_provider.generation return result # TODO(cdent): Extracting this is useful, for reuse by reshaper code, # but having it in this file seems wrong, however, since it uses # _new_allocations it's being left here for now. We need a place for shared # handler code, but util.py is already too big and too diverse. def create_allocation_list(context, data, consumers): """Create a list of Allocations based on provided data. :param context: The placement context. :param data: A dictionary of multiple allocations by consumer uuid. :param consumers: A dictionary, keyed by consumer UUID, of Consumer objects :return: A list of Allocation objects. :raises: `webob.exc.HTTPBadRequest` if a resource provider included in the allocations does not exist. """ allocation_objects = [] for consumer_uuid in data: allocations = data[consumer_uuid]['allocations'] consumer = consumers[consumer_uuid] if allocations: rp_objs = _resource_providers_by_uuid(context, allocations.keys()) for resource_provider_uuid in allocations: resource_provider = rp_objs[resource_provider_uuid] resources = allocations[resource_provider_uuid]['resources'] new_allocations = _new_allocations(context, resource_provider, consumer, resources) allocation_objects.extend(new_allocations) else: # The allocations are empty, which means wipe them out. # Internal to the allocation object this is signalled by a # used value of 0. allocations = alloc_obj.get_all_by_consumer_id( context, consumer_uuid) for allocation in allocations: allocation.used = 0 allocation_objects.append(allocation) return allocation_objects def inspect_consumers(context, data, want_version): """Look at consumer data in allocations and create consumers as needed. Keep a record of the consumers that are created in case they need to be removed later. If an exception is raised by ensure_consumer, commonly HTTPConflict but also anything else, the newly created consumers will be deleted and the exception reraised to the caller. :param context: The placement context. :param data: A dictionary of multiple allocations by consumer uuid. :param want_version: the microversion matcher. :return: A 3-tuple of (a dict of all consumer objects (by consumer uuid), a list of those consumer objects which are new, a dict of RequestAttr objects (by consumer_uuid)) """ # First, ensure that all consumers referenced in the payload actually # exist. And if not, create them. Keep a record of auto-created consumers # so we can clean them up if the end allocation replace_all() fails. 
consumers = {} # dict of Consumer objects, keyed by consumer UUID new_consumers_created = [] # Save requested attributes in order to do an update later in the same # database transaction as AllocationList.replace_all() so that rollbacks # can happen properly. Consumer table updates are guarded by the # generation, so we can't necessarily save all of the original attribute # values and write them back into the table in the event of an exception. # If the generation doesn't match, Consumer.update() is a no-op. requested_attrs = {} for consumer_uuid in data: project_id = data[consumer_uuid]['project_id'] user_id = data[consumer_uuid]['user_id'] consumer_generation = data[consumer_uuid].get('consumer_generation') consumer_type = data[consumer_uuid].get('consumer_type') try: consumer, new_consumer_created, request_attr = ( data_util.ensure_consumer( context, consumer_uuid, project_id, user_id, consumer_generation, consumer_type, want_version)) if new_consumer_created: new_consumers_created.append(consumer) consumers[consumer_uuid] = consumer requested_attrs[consumer_uuid] = request_attr except Exception: # If any errors (for instance, a consumer generation conflict) # occur when ensuring consumer records above, make sure we delete # any auto-created consumers. with excutils.save_and_reraise_exception(): delete_consumers(new_consumers_created) return consumers, new_consumers_created, requested_attrs @wsgi_wrapper.PlacementWsgify @util.check_accept('application/json') def list_for_consumer(req): """List allocations associated with a consumer.""" context = req.environ['placement.context'] context.can(policies.ALLOC_LIST) consumer_id = util.wsgi_path_item(req.environ, 'consumer_uuid') want_version = req.environ[microversion.MICROVERSION_ENVIRON] # NOTE(cdent): There is no way for a 404 to be returned here, # only an empty result. We do not have a way to validate a # consumer id. allocations = alloc_obj.get_all_by_consumer_id(context, consumer_id) output = _serialize_allocations_for_consumer( context, allocations, want_version) last_modified = _last_modified_from_allocations(allocations, want_version) allocations_json = jsonutils.dumps(output) response = req.response response.status = 200 response.body = encodeutils.to_utf8(allocations_json) response.content_type = 'application/json' if want_version.matches((1, 15)): response.last_modified = last_modified response.cache_control = 'no-cache' return response @wsgi_wrapper.PlacementWsgify @util.check_accept('application/json') def list_for_resource_provider(req): """List allocations associated with a resource provider.""" # TODO(cdent): On a shared resource provider (for example a # giant disk farm) this list could get very long. At the moment # we have no facility for limiting the output. Given that we are # using a dict of dicts for the output we are potentially limiting # ourselves in terms of sorting and filtering. 
context = req.environ['placement.context'] context.can(policies.RP_ALLOC_LIST) want_version = req.environ[microversion.MICROVERSION_ENVIRON] uuid = util.wsgi_path_item(req.environ, 'uuid') # confirm existence of resource provider so we get a reasonable # 404 instead of empty list try: rp = rp_obj.ResourceProvider.get_by_uuid(context, uuid) except exception.NotFound as exc: raise webob.exc.HTTPNotFound( "Resource provider '%(rp_uuid)s' not found: %(error)s" % {'rp_uuid': uuid, 'error': exc}) allocs = alloc_obj.get_all_by_resource_provider(context, rp) output = _serialize_allocations_for_resource_provider( allocs, rp, want_version) last_modified = _last_modified_from_allocations(allocs, want_version) allocations_json = jsonutils.dumps(output) response = req.response response.status = 200 response.body = encodeutils.to_utf8(allocations_json) response.content_type = 'application/json' if want_version.matches((1, 15)): response.last_modified = last_modified response.cache_control = 'no-cache' return response def _resource_providers_by_uuid(ctx, rp_uuids): """Helper method that returns a dict, keyed by resource provider UUID, of ResourceProvider objects. :param ctx: The placement context. :param rp_uuids: iterable of UUIDs for providers to fetch. :raises: `webob.exc.HTTPBadRequest` if any of the UUIDs do not refer to an existing resource provider. """ res = {} for rp_uuid in rp_uuids: # TODO(jaypipes): Clearly, this is not efficient to do one query for # each resource provider UUID in the allocations instead of doing a # single query for all the UUIDs. However, since # rp_obj.get_all_by_filters() is way too complicated for # this purpose and doesn't raise NotFound anyway, we'll do this. # Perhaps consider adding a rp_obj.get_all_by_uuids() later on? try: res[rp_uuid] = rp_obj.ResourceProvider.get_by_uuid(ctx, rp_uuid) except exception.NotFound: raise webob.exc.HTTPBadRequest( "Allocation for resource provider '%(rp_uuid)s' " "that does not exist." % {'rp_uuid': rp_uuid}) return res def _new_allocations(context, resource_provider, consumer, resources): """Create new allocation objects for a set of resources Returns a list of Allocation objects :param context: The placement context. :param resource_provider: The resource provider that has the resources. :param consumer: The Consumer object consuming the resources. :param resources: A dict of resource classes and values. 
""" allocations = [] for resource_class in resources: allocation = alloc_obj.Allocation( resource_provider=resource_provider, consumer=consumer, resource_class=resource_class, used=resources[resource_class]) allocations.append(allocation) return allocations def delete_consumers(consumers): """Helper function that deletes any consumer object supplied to it :param consumers: iterable of Consumer objects to delete """ for consumer in consumers: try: consumer.delete() LOG.debug("Deleted auto-created consumer with consumer UUID " "%s after failed allocation", consumer.uuid) except Exception as err: LOG.warning("Got an exception when deleting auto-created " "consumer with UUID %s: %s", consumer.uuid, err) def _set_allocations_for_consumer(req, schema): context = req.environ['placement.context'] context.can(policies.ALLOC_UPDATE) consumer_uuid = util.wsgi_path_item(req.environ, 'consumer_uuid') if not uuidutils.is_uuid_like(consumer_uuid): raise webob.exc.HTTPBadRequest( 'Malformed consumer_uuid: %(consumer_uuid)s' % {'consumer_uuid': consumer_uuid}) consumer_uuid = str(uuid.UUID(consumer_uuid)) data = util.extract_json(req.body, schema) allocation_data = data['allocations'] # Normalize allocation data to dict. want_version = req.environ[microversion.MICROVERSION_ENVIRON] if not want_version.matches((1, 12)): allocations_dict = {} # Allocation are list-ish, transform to dict-ish for allocation in allocation_data: resource_provider_uuid = allocation['resource_provider']['uuid'] allocations_dict[resource_provider_uuid] = { 'resources': allocation['resources'] } allocation_data = allocations_dict allocation_objects = [] # Consumer object saved in case we need to delete the auto-created consumer # record consumer = None # Whether we created a new consumer record created_new_consumer = False # Get or create the project, user, consumer, and consumer type. # This needs to be done in separate database transactions so that the # records can be read after a create collision due to a racing request. consumer, created_new_consumer, request_attr = ( data_util.ensure_consumer( context, consumer_uuid, data.get('project_id'), data.get('user_id'), data.get('consumer_generation'), data.get('consumer_type'), want_version)) if not allocation_data: # The allocations are empty, which means wipe them out. Internal # to the allocation object this is signalled by a used value of 0. # We verified the consumer's generation in util.ensure_consumer() # NOTE(jaypipes): This will only occur 1.28+. The JSONSchema will # prevent an empty allocations object from being passed when there is # no consumer generation, so this is safe to do. allocations = alloc_obj.get_all_by_consumer_id(context, consumer_uuid) for allocation in allocations: allocation.used = 0 allocation_objects.append(allocation) else: # If the body includes an allocation for a resource provider # that does not exist, raise a 400. rp_objs = _resource_providers_by_uuid(context, allocation_data.keys()) for resource_provider_uuid, allocation in allocation_data.items(): resource_provider = rp_objs[resource_provider_uuid] new_allocations = _new_allocations(context, resource_provider, consumer, allocation['resources']) allocation_objects.extend(new_allocations) @db_api.placement_context_manager.writer def _update_consumers_and_create_allocations(ctx): # Update consumer attributes if requested attributes are different. 
# NOTE(melwitt): This will not raise ConcurrentUpdateDetected, that # happens later in AllocationList.replace_all() data_util.update_consumers([consumer], {consumer_uuid: request_attr}) alloc_obj.replace_all(ctx, allocation_objects) LOG.debug("Successfully wrote allocations %s", allocation_objects) def _create_allocations(): try: # NOTE(melwitt): Group the consumer and allocation database updates # in a single transaction so that updates get rolled back # automatically in the event of a consumer generation conflict. _update_consumers_and_create_allocations(context) except Exception: with excutils.save_and_reraise_exception(): if created_new_consumer: delete_consumers([consumer]) try: _create_allocations() # InvalidInventory is a parent for several exceptions that # indicate either that Inventory is not present, or that # capacity limits have been exceeded. except exception.NotFound as exc: raise webob.exc.HTTPBadRequest( "Unable to allocate inventory for consumer %(consumer_uuid)s: " "%(error)s" % {'consumer_uuid': consumer_uuid, 'error': exc}) except exception.InvalidInventory as exc: raise webob.exc.HTTPConflict( 'Unable to allocate inventory: %(error)s' % {'error': exc}) except exception.ConcurrentUpdateDetected as exc: raise webob.exc.HTTPConflict( 'Inventory and/or allocations changed while attempting to ' 'allocate: %(error)s' % {'error': exc}, comment=errors.CONCURRENT_UPDATE) req.response.status = 204 req.response.content_type = None return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.0', '1.7') @util.require_content('application/json') def set_allocations_for_consumer(req): return _set_allocations_for_consumer(req, schema.ALLOCATION_SCHEMA) @wsgi_wrapper.PlacementWsgify # noqa @microversion.version_handler('1.8', '1.11') @util.require_content('application/json') def set_allocations_for_consumer(req): # noqa return _set_allocations_for_consumer(req, schema.ALLOCATION_SCHEMA_V1_8) @wsgi_wrapper.PlacementWsgify # noqa @microversion.version_handler('1.12', '1.27') @util.require_content('application/json') def set_allocations_for_consumer(req): # noqa return _set_allocations_for_consumer(req, schema.ALLOCATION_SCHEMA_V1_12) @wsgi_wrapper.PlacementWsgify # noqa @microversion.version_handler('1.28', '1.33') @util.require_content('application/json') def set_allocations_for_consumer(req): # noqa return _set_allocations_for_consumer(req, schema.ALLOCATION_SCHEMA_V1_28) @wsgi_wrapper.PlacementWsgify # noqa @microversion.version_handler('1.34', '1.37') @util.require_content('application/json') def set_allocations_for_consumer(req): # noqa return _set_allocations_for_consumer(req, schema.ALLOCATION_SCHEMA_V1_34) @wsgi_wrapper.PlacementWsgify # noqa @microversion.version_handler('1.38') @util.require_content('application/json') def set_allocations_for_consumer(req): # noqa return _set_allocations_for_consumer(req, schema.ALLOCATION_SCHEMA_V1_38) @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.13') @util.require_content('application/json') def set_allocations(req): context = req.environ['placement.context'] context.can(policies.ALLOC_MANAGE) want_version = req.environ[microversion.MICROVERSION_ENVIRON] want_schema = schema.POST_ALLOCATIONS_V1_13 if want_version.matches((1, 28)): want_schema = schema.POST_ALLOCATIONS_V1_28 if want_version.matches((1, 34)): want_schema = schema.POST_ALLOCATIONS_V1_34 if want_version.matches((1, 38)): want_schema = schema.POST_ALLOCATIONS_V1_38 data = util.extract_json(req.body, want_schema) consumers, 
new_consumers_created, requested_attrs = inspect_consumers( context, data, want_version) # Create a sequence of allocation objects to be used in one # alloc_obj.replace_all() call, which will mean all the changes happen # within a single transaction and with resource provider and consumer # generations (if applicable) check all in one go. allocations = create_allocation_list(context, data, consumers) @db_api.placement_context_manager.writer def _update_consumers_and_create_allocations(ctx): # Update consumer attributes if requested attributes are different. # NOTE(melwitt): This will not raise ConcurrentUpdateDetected, that # happens later in AllocationList.replace_all() data_util.update_consumers(consumers.values(), requested_attrs) alloc_obj.replace_all(ctx, allocations) LOG.debug("Successfully wrote allocations %s", allocations) def _create_allocations(): try: # NOTE(melwitt): Group the consumer and allocation database updates # in a single transaction so that updates get rolled back # automatically in the event of a consumer generation conflict. _update_consumers_and_create_allocations(context) except Exception: with excutils.save_and_reraise_exception(): delete_consumers(new_consumers_created) try: _create_allocations() except exception.NotFound as exc: raise webob.exc.HTTPBadRequest( "Unable to allocate inventory %(error)s" % {'error': exc}) except exception.InvalidInventory as exc: # InvalidInventory is a parent for several exceptions that # indicate either that Inventory is not present, or that # capacity limits have been exceeded. raise webob.exc.HTTPConflict( 'Unable to allocate inventory: %(error)s' % {'error': exc}) except exception.ConcurrentUpdateDetected as exc: raise webob.exc.HTTPConflict( 'Inventory and/or allocations changed while attempting to ' 'allocate: %(error)s' % {'error': exc}, comment=errors.CONCURRENT_UPDATE) req.response.status = 204 req.response.content_type = None return req.response @wsgi_wrapper.PlacementWsgify def delete_allocations(req): context = req.environ['placement.context'] context.can(policies.ALLOC_DELETE) consumer_uuid = util.wsgi_path_item(req.environ, 'consumer_uuid') allocations = alloc_obj.get_all_by_consumer_id(context, consumer_uuid) if allocations: try: alloc_obj.delete_all(context, allocations) # NOTE(pumaranikar): Following NotFound exception added in the case # when allocation is deleted from allocations list by some other # activity. In that case, delete_all() will throw a NotFound exception. except exception.NotFound as exc: raise webob.exc.HTTPNotFound( "Allocation for consumer with id %(id)s not found. error: " "%(error)s" % {'id': consumer_uuid, 'error': exc}) else: raise webob.exc.HTTPNotFound( "No allocations for consumer '%(consumer_uuid)s'" % {'consumer_uuid': consumer_uuid}) LOG.debug("Successfully deleted allocations %s", allocations) req.response.status = 204 req.response.content_type = None return req.response ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/handlers/allocation_candidate.py0000664000175000017500000002442300000000000026105 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Placement API handlers for getting allocation candidates.""" import collections from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import timeutils import webob from placement import exception from placement import lib from placement import microversion from placement.objects import allocation_candidate as ac_obj from placement.policies import allocation_candidate as policies from placement.schemas import allocation_candidate as schema from placement import util from placement import wsgi_wrapper # The microversions at which the schema used to validate # query parameters to GET /allocation_candidates differs. _GET_SCHEMA_MICROVERSIONS = [ (1, 36), (1, 35), (1, 33), (1, 31), (1, 25), (1, 21), (1, 17), (1, 16) ] def _transform_allocation_requests_dict(alloc_reqs, want_version): """Turn supplied list of AllocationRequest objects into a list of allocations dicts keyed by resource provider uuid of resources involved in the allocation request. The returned results are intended to be used as the body of a PUT /allocations/{consumer_uuid} HTTP request at microversion 1.12 (and beyond). The JSON objects look like the following: [ { "allocations": { $rp_uuid1: { "resources": { "MEMORY_MB": 512 ... } }, $rp_uuid2: { "resources": { "DISK_GB": 1024 ... } } }, # If microversion >=1.34 then map suffixes to providers. "mappings": { "_COMPUTE": [$rp_uuid2], "": [$rp_uuid1] }, }, ... ] """ results = [] for ar in alloc_reqs: # A default dict of {$rp_uuid: {"resources": {}}} rp_resources = collections.defaultdict(lambda: dict(resources={})) for rr in ar.resource_requests: res_dict = rp_resources[rr.resource_provider.uuid]['resources'] res_dict[rr.resource_class] = rr.amount result = dict(allocations=rp_resources) if want_version.matches((1, 34)): result['mappings'] = ar.mappings results.append(result) return results def _transform_allocation_requests_list(alloc_reqs): """Turn supplied list of AllocationRequest objects into a list of dicts of resources involved in the allocation request. The returned results are intended to be usable as the body of a PUT /allocations/{consumer_uuid} HTTP request, prior to microversion 1.12, so therefore we return a list of JSON objects that looks like the following: [ { "allocations": [ { "resource_provider": { "uuid": $rp_uuid, }, "resources": { $resource_class: $requested_amount, ... }, }, ... ], }, ... ] """ results = [] for ar in alloc_reqs: provider_resources = collections.defaultdict(dict) for rr in ar.resource_requests: res_dict = provider_resources[rr.resource_provider.uuid] res_dict[rr.resource_class] = rr.amount allocs = [ { "resource_provider": { "uuid": rp_uuid, }, "resources": resources, } for rp_uuid, resources in provider_resources.items() ] alloc = { "allocations": allocs } results.append(alloc) return results def _transform_provider_summaries(p_sums, requests, want_version): """Turn supplied list of ProviderSummary objects into a dict, keyed by resource provider UUID, of dicts of provider and inventory information. The traits only show up when `want_version` is 1.17 or newer.
All the resource classes are shown when `want_version` is 1.27 or newer while only requested resources are included in the `provider_summaries` for older versions. The parent and root provider uuids only show up when `want_version` is 1.29 or newer. { RP_UUID_1: { 'resources': { 'DISK_GB': { 'capacity': 100, 'used': 0, }, 'VCPU': { 'capacity': 4, 'used': 0, } }, # traits shows up from microversion 1.17 'traits': [ 'HW_CPU_X86_AVX512F', 'HW_CPU_X86_AVX512CD' ] # parent/root provider uuids show up from microversion 1.29 parent_provider_uuid: null, root_provider_uuid: RP_UUID_1 }, RP_UUID_2: { 'resources': { 'DISK_GB': { 'capacity': 100, 'used': 0, }, 'VCPU': { 'capacity': 4, 'used': 0, } }, # traits shows up from microversion 1.17 'traits': [ 'HW_NIC_OFFLOAD_TSO', 'HW_NIC_OFFLOAD_GRO' ], # parent/root provider uuids show up from microversion 1.29 parent_provider_uuid: null, root_provider_uuid: RP_UUID_2 } } """ include_traits = want_version.matches((1, 17)) include_all_resources = want_version.matches((1, 27)) enable_nested_providers = want_version.matches((1, 29)) ret = {} requested_resources = set() for requested_group in requests.values(): requested_resources |= set(requested_group.resources) # if include_all_resources is false, only requested resources are # included in the provider_summaries. for ps in p_sums: resources = { psr.resource_class: { 'capacity': psr.capacity, 'used': psr.used, } for psr in ps.resources if ( include_all_resources or psr.resource_class in requested_resources) } ret[ps.resource_provider.uuid] = {'resources': resources} if include_traits: ret[ps.resource_provider.uuid]['traits'] = ps.traits if enable_nested_providers: ret[ps.resource_provider.uuid]['parent_provider_uuid'] = ( ps.resource_provider.parent_provider_uuid) ret[ps.resource_provider.uuid]['root_provider_uuid'] = ( ps.resource_provider.root_provider_uuid) return ret def _transform_allocation_candidates(alloc_cands, requests, want_version): """Turn supplied AllocationCandidates object into a dict containing allocation requests and provider summaries. { 'allocation_requests': , 'provider_summaries': , } """ if want_version.matches((1, 12)): a_reqs = _transform_allocation_requests_dict( alloc_cands.allocation_requests, want_version) else: a_reqs = _transform_allocation_requests_list( alloc_cands.allocation_requests) p_sums = _transform_provider_summaries( alloc_cands.provider_summaries, requests, want_version) return { 'allocation_requests': a_reqs, 'provider_summaries': p_sums, } def _get_schema(want_version): """Calculate the desired query parameter schema for list_allocation_candidates. 
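    For illustration, a request at microversion 1.26 walks _GET_SCHEMA_MICROVERSIONS from newest to oldest, first matches (1, 25), and therefore returns schema.GET_SCHEMA_1_25; requests older than 1.16 fall through to the default GET_SCHEMA_1_10.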
""" for maj, min in _GET_SCHEMA_MICROVERSIONS: if want_version.matches((maj, min)): return getattr(schema, 'GET_SCHEMA_%d_%d' % (maj, min)) return schema.GET_SCHEMA_1_10 @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.10') @util.check_accept('application/json') def list_allocation_candidates(req): """GET a JSON object with a list of allocation requests and a JSON object of provider summary objects On success return a 200 and an application/json body representing a collection of allocation requests and provider summaries """ context = req.environ['placement.context'] context.can(policies.LIST) want_version = req.environ[microversion.MICROVERSION_ENVIRON] get_schema = _get_schema(want_version) util.validate_query_params(req, get_schema) rqparams = lib.RequestWideParams.from_request(req) groups = lib.RequestGroup.dict_from_request(req, rqparams) if not rqparams.group_policy: # group_policy is required if more than one numbered request group was # specified. if len([rg for rg in groups.values() if rg.use_same_provider]) > 1: raise webob.exc.HTTPBadRequest( 'The "group_policy" parameter is required when specifying ' 'more than one "resources{N}" parameter.') # We can't be aware of nested architecture with old microversions nested_aware = want_version.matches((1, 29)) try: cands = ac_obj.AllocationCandidates.get_by_requests( context, groups, rqparams, nested_aware=nested_aware) except exception.ResourceClassNotFound as exc: raise webob.exc.HTTPBadRequest( 'Invalid resource class in resources parameter: %(error)s' % {'error': exc}) except exception.TraitNotFound as exc: raise webob.exc.HTTPBadRequest(str(exc)) response = req.response trx_cands = _transform_allocation_candidates(cands, groups, want_version) json_data = jsonutils.dumps(trx_cands) response.body = encodeutils.to_utf8(json_data) response.content_type = 'application/json' if want_version.matches((1, 15)): response.cache_control = 'no-cache' response.last_modified = timeutils.utcnow(with_timezone=True) return response ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/handlers/inventory.py0000664000175000017500000004266400000000000024030 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Inventory handlers for Placement API.""" import copy import operator from oslo_db import exception as db_exc from oslo_serialization import jsonutils from oslo_utils import encodeutils import webob from placement.db import constants as db_const from placement import errors from placement import exception from placement import microversion from placement.objects import inventory as inv_obj from placement.objects import resource_provider as rp_obj from placement.policies import inventory as policies from placement.schemas import inventory as schema from placement import util from placement import wsgi_wrapper # NOTE(cdent): We keep our own representation of inventory defaults # and output fields, separate from the versioned object to avoid # inadvertent API changes when the object defaults are changed. OUTPUT_INVENTORY_FIELDS = [ 'total', 'reserved', 'min_unit', 'max_unit', 'step_size', 'allocation_ratio', ] INVENTORY_DEFAULTS = { 'reserved': 0, 'min_unit': 1, 'max_unit': db_const.MAX_INT, 'step_size': 1, 'allocation_ratio': 1.0 } def _extract_inventory(body, schema): """Extract and validate inventory from JSON body.""" data = util.extract_json(body, schema) inventory_data = copy.copy(INVENTORY_DEFAULTS) inventory_data.update(data) return inventory_data def _extract_inventories(body, schema): """Extract and validate multiple inventories from JSON body.""" data = util.extract_json(body, schema) inventories = {} for res_class, raw_inventory in data['inventories'].items(): inventory_data = copy.copy(INVENTORY_DEFAULTS) inventory_data.update(raw_inventory) inventories[res_class] = inventory_data data['inventories'] = inventories return data def make_inventory_object(resource_provider, resource_class, **data): """Single place to catch malformed Inventories.""" # TODO(cdent): Some of the validation checks that are done here # could be done via JSONschema (using, for example, "minimum": # 0) for non-negative integers. It's not clear if that is # duplication or decoupling so leaving it as this for now. 
try: inventory = inv_obj.Inventory( resource_provider=resource_provider, resource_class=resource_class, **data) except (ValueError, TypeError) as exc: raise webob.exc.HTTPBadRequest( 'Bad inventory %(class)s for resource provider ' '%(rp_uuid)s: %(error)s' % {'class': resource_class, 'rp_uuid': resource_provider.uuid, 'error': exc}) return inventory def _send_inventories(req, resource_provider, inventories): """Send a JSON representation of a list of inventories.""" response = req.response response.status = 200 output, last_modified = _serialize_inventories( inventories, resource_provider.generation) response.body = encodeutils.to_utf8(jsonutils.dumps(output)) response.content_type = 'application/json' want_version = req.environ[microversion.MICROVERSION_ENVIRON] if want_version.matches((1, 15)): response.last_modified = last_modified response.cache_control = 'no-cache' return response def _send_inventory(req, resource_provider, inventory, status=200): """Send a JSON representation of one single inventory.""" response = req.response response.status = status response.body = encodeutils.to_utf8(jsonutils.dumps(_serialize_inventory( inventory, generation=resource_provider.generation))) response.content_type = 'application/json' want_version = req.environ[microversion.MICROVERSION_ENVIRON] if want_version.matches((1, 15)): modified = util.pick_last_modified(None, inventory) response.last_modified = modified response.cache_control = 'no-cache' return response def _serialize_inventory(inventory, generation=None): """Turn a single inventory into a dictionary.""" data = { field: getattr(inventory, field) for field in OUTPUT_INVENTORY_FIELDS } if generation: data['resource_provider_generation'] = generation return data def _serialize_inventories(inventories, generation): """Turn a list of inventories in a dict by resource class.""" inventories_by_class = {inventory.resource_class: inventory for inventory in inventories} inventories_dict = {} last_modified = None for resource_class, inventory in inventories_by_class.items(): last_modified = util.pick_last_modified(last_modified, inventory) inventories_dict[resource_class] = _serialize_inventory( inventory, generation=None) return ({'resource_provider_generation': generation, 'inventories': inventories_dict}, last_modified) def _validate_inventory_capacity(version, inventories): """Validate inventory capacity. :param version: request microversion. :param inventories: One Inventory or a list of Inventory objects to validate capacities of. :raises: exception.InvalidInventoryCapacityReservedCanBeTotal if request microversion is 1.26 or higher and any inventory has capacity < 0. :raises: exception.InvalidInventoryCapacity if request microversion is lower than 1.26 and any inventory has capacity <= 0. """ if not version.matches((1, 26)): op = operator.le exc_class = exception.InvalidInventoryCapacity else: op = operator.lt exc_class = exception.InvalidInventoryCapacityReservedCanBeTotal if isinstance(inventories, inv_obj.Inventory): inventories = [inventories] for inventory in inventories: if op(inventory.capacity, 0): raise exc_class( resource_class=inventory.resource_class, resource_provider=inventory.resource_provider.uuid) @wsgi_wrapper.PlacementWsgify @util.require_content('application/json') def create_inventory(req): """POST to create one inventory. On success return a 201 response, a location header pointing to the newly created inventory and an application/json representation of the inventory. 
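    For illustration only (the class and values are made up), a request body might look like:

    {
        "resource_class": "DISK_GB",
        "total": 1024,
        "reserved": 2
    }

    Any optional fields left out of the body are filled in from INVENTORY_DEFAULTS before the Inventory object is built.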
""" context = req.environ['placement.context'] context.can(policies.CREATE) uuid = util.wsgi_path_item(req.environ, 'uuid') resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) data = _extract_inventory(req.body, schema.POST_INVENTORY_SCHEMA) resource_class = data.pop('resource_class') inventory = make_inventory_object(resource_provider, resource_class, **data) try: _validate_inventory_capacity( req.environ[microversion.MICROVERSION_ENVIRON], inventory) resource_provider.add_inventory(inventory) except (exception.ConcurrentUpdateDetected, db_exc.DBDuplicateEntry) as exc: raise webob.exc.HTTPConflict( 'Update conflict: %(error)s' % {'error': exc}, comment=errors.CONCURRENT_UPDATE) except (exception.InvalidInventoryCapacity, exception.NotFound) as exc: raise webob.exc.HTTPBadRequest( 'Unable to create inventory for resource provider ' '%(rp_uuid)s: %(error)s' % {'rp_uuid': resource_provider.uuid, 'error': exc}) response = req.response response.location = util.inventory_url( req.environ, resource_provider, resource_class) return _send_inventory(req, resource_provider, inventory, status=201) @wsgi_wrapper.PlacementWsgify def delete_inventory(req): """DELETE to destroy a single inventory. If the inventory is in use or resource provider generation is out of sync return a 409. On success return a 204 and an empty body. """ context = req.environ['placement.context'] context.can(policies.DELETE) uuid = util.wsgi_path_item(req.environ, 'uuid') resource_class = util.wsgi_path_item(req.environ, 'resource_class') resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) try: resource_provider.delete_inventory(resource_class) except (exception.ConcurrentUpdateDetected, exception.InventoryInUse) as exc: raise webob.exc.HTTPConflict( 'Unable to delete inventory of class %(class)s: %(error)s' % {'class': resource_class, 'error': exc}, comment=errors.CONCURRENT_UPDATE) except exception.NotFound as exc: raise webob.exc.HTTPNotFound( 'No inventory of class %(class)s found for delete: %(error)s' % {'class': resource_class, 'error': exc}) response = req.response response.status = 204 response.content_type = None return response @wsgi_wrapper.PlacementWsgify @util.check_accept('application/json') def get_inventories(req): """GET a list of inventories. On success return a 200 with an application/json body representing a collection of inventories. """ context = req.environ['placement.context'] context.can(policies.LIST) uuid = util.wsgi_path_item(req.environ, 'uuid') try: rp = rp_obj.ResourceProvider.get_by_uuid(context, uuid) except exception.NotFound as exc: raise webob.exc.HTTPNotFound( "No resource provider with uuid %(uuid)s found : %(error)s" % {'uuid': uuid, 'error': exc}) inv_list = inv_obj.get_all_by_resource_provider(context, rp) return _send_inventories(req, rp, inv_list) @wsgi_wrapper.PlacementWsgify @util.check_accept('application/json') def get_inventory(req): """GET one inventory. On success return a 200 an application/json body representing one inventory. 
""" context = req.environ['placement.context'] context.can(policies.SHOW) uuid = util.wsgi_path_item(req.environ, 'uuid') resource_class = util.wsgi_path_item(req.environ, 'resource_class') try: rp = rp_obj.ResourceProvider.get_by_uuid(context, uuid) except exception.NotFound as exc: raise webob.exc.HTTPNotFound( "No resource provider with uuid %(uuid)s found : %(error)s" % {'uuid': uuid, 'error': exc}) inv_list = inv_obj.get_all_by_resource_provider(context, rp) inventory = inv_obj.find(inv_list, resource_class) if not inventory: raise webob.exc.HTTPNotFound( 'No inventory of class %(class)s for %(rp_uuid)s' % {'class': resource_class, 'rp_uuid': uuid}) return _send_inventory(req, rp, inventory) @wsgi_wrapper.PlacementWsgify @util.require_content('application/json') def set_inventories(req): """PUT to set all inventory for a resource provider. Create, update and delete inventory as required to reset all the inventory. If the resource generation is out of sync, return a 409. If an inventory to be deleted is in use, return a 409. If any inventory to be created or updated has settings which are invalid (for example reserved exceeds capacity), return a 400. On success return a 200 with an application/json body representing the inventories. """ context = req.environ['placement.context'] context.can(policies.UPDATE) uuid = util.wsgi_path_item(req.environ, 'uuid') resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) data = _extract_inventories(req.body, schema.PUT_INVENTORY_SCHEMA) if data['resource_provider_generation'] != resource_provider.generation: raise webob.exc.HTTPConflict( 'resource provider generation conflict', comment=errors.CONCURRENT_UPDATE) inventories = [] for res_class, inventory_data in data['inventories'].items(): inventory = make_inventory_object( resource_provider, res_class, **inventory_data) inventories.append(inventory) try: _validate_inventory_capacity( req.environ[microversion.MICROVERSION_ENVIRON], inventories) resource_provider.set_inventory(inventories) except exception.ResourceClassNotFound as exc: raise webob.exc.HTTPBadRequest( 'Unknown resource class in inventory for resource provider ' '%(rp_uuid)s: %(error)s' % {'rp_uuid': resource_provider.uuid, 'error': exc}) except exception.InventoryWithResourceClassNotFound as exc: raise webob.exc.HTTPConflict( 'Race condition detected when setting inventory. No inventory ' 'record with resource class for resource provider ' '%(rp_uuid)s: %(error)s' % {'rp_uuid': resource_provider.uuid, 'error': exc}) except (exception.ConcurrentUpdateDetected, db_exc.DBDuplicateEntry) as exc: raise webob.exc.HTTPConflict( 'update conflict: %(error)s' % {'error': exc}, comment=errors.CONCURRENT_UPDATE) except exception.InventoryInUse as exc: raise webob.exc.HTTPConflict( 'update conflict: %(error)s' % {'error': exc}, comment=errors.INVENTORY_INUSE) except exception.InvalidInventoryCapacity as exc: raise webob.exc.HTTPBadRequest( 'Unable to update inventory for resource provider ' '%(rp_uuid)s: %(error)s' % {'rp_uuid': resource_provider.uuid, 'error': exc}) return _send_inventories(req, resource_provider, inventories) @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.5', status_code=405) def delete_inventories(req): """DELETE all inventory for a resource provider. Delete inventory as required to reset all the inventory. If an inventory to be deleted is in use, return a 409 Conflict. On success return a 204 No content. Return 405 Method Not Allowed if the wanted microversion does not match. 
""" context = req.environ['placement.context'] context.can(policies.DELETE) uuid = util.wsgi_path_item(req.environ, 'uuid') resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) try: resource_provider.set_inventory([]) except exception.ConcurrentUpdateDetected: raise webob.exc.HTTPConflict( 'Unable to delete inventory for resource provider ' '%(rp_uuid)s because the inventory was updated by ' 'another process. Please retry your request.' % {'rp_uuid': resource_provider.uuid}, comment=errors.CONCURRENT_UPDATE) except exception.InventoryInUse as ex: raise webob.exc.HTTPConflict(ex.format_message(), comment=errors.INVENTORY_INUSE) response = req.response response.status = 204 response.content_type = None return response @wsgi_wrapper.PlacementWsgify @util.require_content('application/json') def update_inventory(req): """PUT to update one inventory. If the resource generation is out of sync, return a 409. If the inventory has settings which are invalid (for example reserved exceeds capacity), return a 400. On success return a 200 with an application/json body representing the inventory. """ context = req.environ['placement.context'] context.can(policies.UPDATE) uuid = util.wsgi_path_item(req.environ, 'uuid') resource_class = util.wsgi_path_item(req.environ, 'resource_class') resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) data = _extract_inventory(req.body, schema.BASE_INVENTORY_SCHEMA) if data['resource_provider_generation'] != resource_provider.generation: raise webob.exc.HTTPConflict( 'resource provider generation conflict', comment=errors.CONCURRENT_UPDATE) inventory = make_inventory_object(resource_provider, resource_class, **data) try: _validate_inventory_capacity( req.environ[microversion.MICROVERSION_ENVIRON], inventory) resource_provider.update_inventory(inventory) except (exception.ConcurrentUpdateDetected, db_exc.DBDuplicateEntry) as exc: raise webob.exc.HTTPConflict( 'update conflict: %(error)s' % {'error': exc}, comment=errors.CONCURRENT_UPDATE) except exception.InventoryWithResourceClassNotFound as exc: raise webob.exc.HTTPBadRequest( 'No inventory record with resource class for resource provider ' '%(rp_uuid)s: %(error)s' % {'rp_uuid': resource_provider.uuid, 'error': exc}) except exception.InvalidInventoryCapacity as exc: raise webob.exc.HTTPBadRequest( 'Unable to update inventory for resource provider ' '%(rp_uuid)s: %(error)s' % {'rp_uuid': resource_provider.uuid, 'error': exc}) return _send_inventory(req, resource_provider, inventory) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/handlers/reshaper.py0000664000175000017500000001464600000000000023603 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Placement API handler for the reshaper. 
The reshaper provides for atomically migrating resource provider inventories and associated allocations when some of the inventory moves from one resource provider to another, such as when a class of inventory moves from a parent provider to a new child provider. """ import copy from oslo_utils import excutils import webob from placement import db_api from placement import errors from placement import exception # TODO(cdent): That we are doing this suggests that there's stuff to be # extracted from the handler to a shared module. from placement.handlers import allocation from placement.handlers import inventory from placement.handlers import util as data_util from placement import microversion from placement.objects import reshaper from placement.objects import resource_provider as rp_obj from placement.policies import reshaper as policies from placement.schemas import reshaper as schema from placement import util from placement import wsgi_wrapper @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.30') @util.require_content('application/json') def reshape(req): context = req.environ['placement.context'] want_version = req.environ[microversion.MICROVERSION_ENVIRON] context.can(policies.RESHAPE) reshaper_schema = schema.POST_RESHAPER_SCHEMA if want_version.matches((1, 38)): reshaper_schema = schema.POST_RESHAPER_SCHEMA_V1_38 elif want_version.matches((1, 34)): reshaper_schema = schema.POST_RESHAPER_SCHEMA_V1_34 data = util.extract_json(req.body, reshaper_schema) inventories = data['inventories'] allocations = data['allocations'] # We're going to create several lists of Inventory objects, keyed by rp # uuid. inventory_by_rp = {} # TODO(cdent): this has overlaps with inventory:set_inventories # and is a mess of bad names and lack of method extraction. for rp_uuid, inventory_data in inventories.items(): try: resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, rp_uuid) except exception.NotFound as exc: raise webob.exc.HTTPBadRequest( 'Resource provider %(rp_uuid)s in inventories not found: ' '%(error)s' % {'rp_uuid': rp_uuid, 'error': exc}, comment=errors.RESOURCE_PROVIDER_NOT_FOUND) # Do an early generation check. generation = inventory_data['resource_provider_generation'] if generation != resource_provider.generation: raise webob.exc.HTTPConflict( 'resource provider generation conflict for provider %(rp)s: ' 'actual: %(actual)s, given: %(given)s' % {'rp': rp_uuid, 'actual': resource_provider.generation, 'given': generation}, comment=errors.CONCURRENT_UPDATE) inv_list = [] for res_class, raw_inventory in inventory_data['inventories'].items(): inv_data = copy.copy(inventory.INVENTORY_DEFAULTS) inv_data.update(raw_inventory) inv_object = inventory.make_inventory_object( resource_provider, res_class, **inv_data) inv_list.append(inv_object) inventory_by_rp[resource_provider] = inv_list # Make the consumer objects associated with the allocations. consumers, new_consumers_created, requested_attrs = ( allocation.inspect_consumers(context, allocations, want_version)) # When these allocations are created they get resource provider objects # which are different instances (usually with the same data) from those # loaded above when creating inventory objects. The reshape method below # is responsible for ensuring that the resource providers and their # generations do not conflict. 
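    # For illustration only (UUIDs, generations and resource classes are
    # made up), the body handled here has the general shape:
    #
    #     {
    #         "inventories": {
    #             "<rp_uuid>": {
    #                 "resource_provider_generation": 5,
    #                 "inventories": {"VCPU": {"total": 8}}
    #             }
    #         },
    #         "allocations": {
    #             "<consumer_uuid>": {
    #                 "allocations": {
    #                     "<rp_uuid>": {"resources": {"VCPU": 2}}
    #                 },
    #                 "project_id": "<project>",
    #                 "user_id": "<user>",
    #                 "consumer_generation": 1
    #             }
    #         }
    #     }
    #
    # The exact required fields vary by microversion and are enforced by the
    # POST_RESHAPER_SCHEMA variant selected above.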
allocation_objects = allocation.create_allocation_list( context, allocations, consumers) @db_api.placement_context_manager.writer def _update_consumers_and_create_allocations(ctx): # Update consumer attributes if requested attributes are different. # NOTE(melwitt): This will not raise ConcurrentUpdateDetected, that # happens later in AllocationList.replace_all() data_util.update_consumers(consumers.values(), requested_attrs) reshaper.reshape(ctx, inventory_by_rp, allocation_objects) def _create_allocations(): try: # NOTE(melwitt): Group the consumer and allocation database updates # in a single transaction so that updates get rolled back # automatically in the event of a consumer generation conflict. _update_consumers_and_create_allocations(context) except Exception: with excutils.save_and_reraise_exception(): allocation.delete_consumers(new_consumers_created) try: _create_allocations() # Generation conflict is a (rare) possibility in a few different # places in reshape(). except exception.ConcurrentUpdateDetected as exc: raise webob.exc.HTTPConflict( 'update conflict: %(error)s' % {'error': exc}, comment=errors.CONCURRENT_UPDATE) # A NotFound here means a resource class that does not exist was named except exception.NotFound as exc: raise webob.exc.HTTPBadRequest( 'malformed reshaper data: %(error)s' % {'error': exc}) # Distinguish inventory in use (has allocations on it)... except exception.InventoryInUse as exc: raise webob.exc.HTTPConflict( 'update conflict: %(error)s' % {'error': exc}, comment=errors.INVENTORY_INUSE) # ...from allocations which won't fit for a variety of reasons. except exception.InvalidInventory as exc: raise webob.exc.HTTPConflict( 'Unable to allocate inventory: %(error)s' % {'error': exc}) req.response.status = 204 req.response.content_type = None return req.response ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/handlers/resource_class.py0000664000175000017500000002055100000000000024776 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
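# For illustration only (the class name is made up): at microversion 1.7 and
# later a custom resource class is created, or confirmed to already exist,
# with an idempotent PUT and no request body, e.g.
#
#     PUT /resource_classes/CUSTOM_BAREMETAL_GOLD
#
# returning 201 on first creation and 204 when the class already exists, as
# implemented by the second update_resource_class handler below.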
"""Placement API handlers for resource classes.""" from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import timeutils import webob from placement import exception from placement import microversion from placement.objects import resource_class as rc_obj from placement.policies import resource_class as policies from placement.schemas import resource_class as schema from placement import util from placement import wsgi_wrapper def _serialize_links(environ, rc): url = util.resource_class_url(environ, rc) links = [{'rel': 'self', 'href': url}] return links def _serialize_resource_class(environ, rc): data = { 'name': rc.name, 'links': _serialize_links(environ, rc) } return data def _serialize_resource_classes(environ, rcs, want_version): output = [] last_modified = None get_last_modified = want_version.matches((1, 15)) for rc in rcs: if get_last_modified: last_modified = util.pick_last_modified(last_modified, rc) data = _serialize_resource_class(environ, rc) output.append(data) last_modified = last_modified or timeutils.utcnow(with_timezone=True) return ({"resource_classes": output}, last_modified) @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.2') @util.require_content('application/json') def create_resource_class(req): """POST to create a resource class. On success return a 201 response with an empty body and a location header pointing to the newly created resource class. """ context = req.environ['placement.context'] context.can(policies.CREATE) data = util.extract_json(req.body, schema.POST_RC_SCHEMA_V1_2) try: rc = rc_obj.ResourceClass(context, name=data['name']) rc.create() except exception.ResourceClassExists: raise webob.exc.HTTPConflict( 'Conflicting resource class already exists: %(name)s' % {'name': data['name']}) except exception.MaxDBRetriesExceeded: raise webob.exc.HTTPConflict( 'Max retries of DB transaction exceeded attempting ' 'to create resource class: %(name)s, please ' 'try again.' % {'name': data['name']}) req.response.location = util.resource_class_url(req.environ, rc) req.response.status = 201 req.response.content_type = None return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.2') def delete_resource_class(req): """DELETE to destroy a single resource class. On success return a 204 and an empty body. """ name = util.wsgi_path_item(req.environ, 'name') context = req.environ['placement.context'] context.can(policies.DELETE) # The containing application will catch a not found here. rc = rc_obj.ResourceClass.get_by_name(context, name) try: rc.destroy() except exception.ResourceClassCannotDeleteStandard as exc: raise webob.exc.HTTPBadRequest( 'Error in delete resource class: %(error)s' % {'error': exc}) except exception.ResourceClassInUse as exc: raise webob.exc.HTTPConflict( 'Error in delete resource class: %(error)s' % {'error': exc}) req.response.status = 204 req.response.content_type = None return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.2') @util.check_accept('application/json') def get_resource_class(req): """Get a single resource class. On success return a 200 with an application/json body representing the resource class. """ name = util.wsgi_path_item(req.environ, 'name') context = req.environ['placement.context'] context.can(policies.SHOW) want_version = req.environ[microversion.MICROVERSION_ENVIRON] # The containing application will catch a not found here. 
rc = rc_obj.ResourceClass.get_by_name(context, name) req.response.body = encodeutils.to_utf8(jsonutils.dumps( _serialize_resource_class(req.environ, rc)) ) req.response.content_type = 'application/json' if want_version.matches((1, 15)): req.response.cache_control = 'no-cache' # Non-custom resource classes will return None from pick_last_modified, # so the 'or' causes utcnow to be used. last_modified = util.pick_last_modified(None, rc) or timeutils.utcnow( with_timezone=True) req.response.last_modified = last_modified return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.2') @util.check_accept('application/json') def list_resource_classes(req): """GET a list of resource classes. On success return a 200 and an application/json body representing a collection of resource classes. """ context = req.environ['placement.context'] context.can(policies.LIST) want_version = req.environ[microversion.MICROVERSION_ENVIRON] rcs = rc_obj.get_all(context) response = req.response output, last_modified = _serialize_resource_classes( req.environ, rcs, want_version) response.body = encodeutils.to_utf8(jsonutils.dumps(output)) response.content_type = 'application/json' if want_version.matches((1, 15)): response.last_modified = last_modified response.cache_control = 'no-cache' return response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.2', '1.6') @util.require_content('application/json') def update_resource_class(req): """PUT to update a single resource class. On success return a 200 response with a representation of the updated resource class. """ name = util.wsgi_path_item(req.environ, 'name') context = req.environ['placement.context'] context.can(policies.UPDATE) data = util.extract_json(req.body, schema.PUT_RC_SCHEMA_V1_2) # The containing application will catch a not found here. rc = rc_obj.ResourceClass.get_by_name(context, name) rc.name = data['name'] try: rc.save() except exception.ResourceClassExists: raise webob.exc.HTTPConflict( 'Resource class already exists: %(name)s' % {'name': rc.name}) except exception.ResourceClassCannotUpdateStandard: raise webob.exc.HTTPBadRequest( 'Cannot update standard resource class %(rp_name)s' % {'rp_name': name}) req.response.body = encodeutils.to_utf8(jsonutils.dumps( _serialize_resource_class(req.environ, rc)) ) req.response.status = 200 req.response.content_type = 'application/json' return req.response @wsgi_wrapper.PlacementWsgify # noqa @microversion.version_handler('1.7') def update_resource_class(req): # noqa """PUT to create or validate the existence of single resource class. On a successful create return 201. Return 204 if the class already exists. If the resource class is not a custom resource class, return a 400. 409 might be a better choice, but 400 aligns with previous code. """ name = util.wsgi_path_item(req.environ, 'name') context = req.environ['placement.context'] context.can(policies.UPDATE) # Use JSON validation to validation resource class name. util.extract_json('{"name": "%s"}' % name, schema.PUT_RC_SCHEMA_V1_2) status = 204 try: rc = rc_obj.ResourceClass.get_by_name(context, name) except exception.NotFound: try: rc = rc_obj.ResourceClass(context, name=name) rc.create() status = 201 # We will not see ResourceClassCannotUpdateStandard because # that was already caught when validating the {name}. 
except exception.ResourceClassExists: # Someone just now created the class, so stick with 204 pass req.response.status = status req.response.content_type = None req.response.location = util.resource_class_url(req.environ, rc) return req.response ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/handlers/resource_provider.py0000664000175000017500000003065600000000000025532 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Placement API handlers for resource providers.""" import uuid as uuidlib from oslo_db import exception as db_exc from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import timeutils from oslo_utils import uuidutils import webob from placement import errors from placement import exception from placement import microversion from placement.objects import resource_provider as rp_obj from placement.policies import resource_provider as policies from placement.schemas import resource_provider as rp_schema from placement import util from placement import wsgi_wrapper def _serialize_links(environ, resource_provider): url = util.resource_provider_url(environ, resource_provider) links = [{'rel': 'self', 'href': url}] rel_types = ['inventories', 'usages'] want_version = environ[microversion.MICROVERSION_ENVIRON] if want_version >= (1, 1): rel_types.append('aggregates') if want_version >= (1, 6): rel_types.append('traits') if want_version >= (1, 11): rel_types.append('allocations') for rel in rel_types: links.append({'rel': rel, 'href': '%s/%s' % (url, rel)}) return links def _serialize_provider(environ, resource_provider, want_version): data = { 'uuid': resource_provider.uuid, 'name': resource_provider.name, 'generation': resource_provider.generation, 'links': _serialize_links(environ, resource_provider) } if want_version.matches((1, 14)): data['parent_provider_uuid'] = resource_provider.parent_provider_uuid data['root_provider_uuid'] = resource_provider.root_provider_uuid return data def _serialize_providers(environ, resource_providers, want_version): output = [] last_modified = None get_last_modified = want_version.matches((1, 15)) for provider in resource_providers: if get_last_modified: last_modified = util.pick_last_modified(last_modified, provider) provider_data = _serialize_provider(environ, provider, want_version) output.append(provider_data) last_modified = last_modified or timeutils.utcnow(with_timezone=True) return {"resource_providers": output}, last_modified @wsgi_wrapper.PlacementWsgify @util.require_content('application/json') def create_resource_provider(req): """POST to create a resource provider. On success return a 201 response with an empty body (microversions 1.0 - 1.19) or a 200 response with a payload representing the newly created resource provider (microversions 1.20 - latest), and a location header pointing to the resource provider. 
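    For illustration only (the name and UUID are made up), a request body might look like:

    {
        "name": "compute-node-01",
        "uuid": "7d2590ae-fb85-4080-9306-058b4c915e3f"
    }

    "uuid" is optional and is generated when omitted; from microversion 1.14 a "parent_provider_uuid" may also be supplied.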
""" context = req.environ['placement.context'] context.can(policies.CREATE) schema = rp_schema.POST_RESOURCE_PROVIDER_SCHEMA want_version = req.environ[microversion.MICROVERSION_ENVIRON] if want_version.matches((1, 14)): schema = rp_schema.POST_RP_SCHEMA_V1_14 data = util.extract_json(req.body, schema) try: if data.get('uuid'): # Normalize UUID with no proper dashes into dashed one # with format {8}-{4}-{4}-{4}-{12} data['uuid'] = str(uuidlib.UUID(data['uuid'])) else: data['uuid'] = uuidutils.generate_uuid() resource_provider = rp_obj.ResourceProvider(context, **data) resource_provider.create() except db_exc.DBDuplicateEntry as exc: # Whether exc.columns has one or two entries (in the event # of both fields being duplicates) appears to be database # dependent, so going with the complete solution here. duplicates = [] for column in exc.columns: # For MySQL, this is error 1062: # # Duplicate entry '%s' for key %d # # The 'key' value is captured in 'DBDuplicateEntry.columns'. # Despite the name, this isn't always a column name. While MySQL # 5.x does indeed use the name of the column, 8.x uses the name of # the constraint. oslo.db should probably fix this, but until that # happens we need to handle both cases if column == 'uniq_resource_providers0uuid': duplicates.append(f'uuid: {data["uuid"]}') elif column == 'uniq_resource_providers0name': duplicates.append(f'name: {data["name"]}') else: duplicates.append(f'{column}: {data[column]}') raise webob.exc.HTTPConflict( 'Conflicting resource provider %(duplicate)s already exists.' % {'duplicate': ', '.join(duplicates)}, comment=errors.DUPLICATE_NAME) except exception.ObjectActionError as exc: raise webob.exc.HTTPBadRequest( 'Unable to create resource provider "%(name)s", %(rp_uuid)s: ' '%(error)s' % {'name': data['name'], 'rp_uuid': data['uuid'], 'error': exc}) req.response.location = util.resource_provider_url( req.environ, resource_provider) if want_version.matches(min_version=(1, 20)): req.response.body = encodeutils.to_utf8(jsonutils.dumps( _serialize_provider(req.environ, resource_provider, want_version))) req.response.content_type = 'application/json' modified = util.pick_last_modified(None, resource_provider) req.response.last_modified = modified req.response.cache_control = 'no-cache' else: req.response.status = 201 req.response.content_type = None return req.response @wsgi_wrapper.PlacementWsgify def delete_resource_provider(req): """DELETE to destroy a single resource provider. On success return a 204 and an empty body. """ uuid = util.wsgi_path_item(req.environ, 'uuid') context = req.environ['placement.context'] context.can(policies.DELETE) # The containing application will catch a not found here. try: resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) resource_provider.destroy() except exception.ResourceProviderInUse as exc: raise webob.exc.HTTPConflict( 'Unable to delete resource provider %(rp_uuid)s: %(error)s' % {'rp_uuid': uuid, 'error': exc}, comment=errors.PROVIDER_IN_USE) except exception.NotFound: raise webob.exc.HTTPNotFound( "No resource provider with uuid %s found for delete" % uuid) except exception.CannotDeleteParentResourceProvider: raise webob.exc.HTTPConflict( "Unable to delete parent resource provider %(rp_uuid)s: " "It has child resource providers." 
% {'rp_uuid': uuid}, comment=errors.PROVIDER_CANNOT_DELETE_PARENT) req.response.status = 204 req.response.content_type = None return req.response @wsgi_wrapper.PlacementWsgify @util.check_accept('application/json') def get_resource_provider(req): """Get a single resource provider. On success return a 200 with an application/json body representing the resource provider. """ want_version = req.environ[microversion.MICROVERSION_ENVIRON] uuid = util.wsgi_path_item(req.environ, 'uuid') context = req.environ['placement.context'] context.can(policies.SHOW) # The containing application will catch a not found here. resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) response = req.response response.body = encodeutils.to_utf8(jsonutils.dumps( _serialize_provider(req.environ, resource_provider, want_version))) response.content_type = 'application/json' if want_version.matches((1, 15)): modified = util.pick_last_modified(None, resource_provider) response.last_modified = modified response.cache_control = 'no-cache' return response @wsgi_wrapper.PlacementWsgify @util.check_accept('application/json') def list_resource_providers(req): """GET a list of resource providers. On success return a 200 and an application/json body representing a collection of resource providers. """ context = req.environ['placement.context'] context.can(policies.LIST) want_version = req.environ[microversion.MICROVERSION_ENVIRON] schema = rp_schema.GET_RPS_SCHEMA_1_0 if want_version.matches((1, 18)): schema = rp_schema.GET_RPS_SCHEMA_1_18 elif want_version.matches((1, 14)): schema = rp_schema.GET_RPS_SCHEMA_1_14 elif want_version.matches((1, 4)): schema = rp_schema.GET_RPS_SCHEMA_1_4 elif want_version.matches((1, 3)): schema = rp_schema.GET_RPS_SCHEMA_1_3 util.validate_query_params(req, schema) filters = {} # special handling of member_of qparam since we allow multiple member_of # params at microversion 1.24. if 'member_of' in req.GET: filters['member_of'], filters['forbidden_aggs'] = ( util.normalize_member_of_qs_params(req)) if 'required' in req.GET: filters['required_traits'], filters['forbidden_traits'] = ( util.normalize_traits_qs_params(req)) qpkeys = ('uuid', 'name', 'in_tree', 'resources') for attr in qpkeys: if attr in req.GET: value = req.GET[attr] if attr == 'resources': value = util.normalize_resources_qs_param(value) filters[attr] = value try: resource_providers = rp_obj.get_all_by_filters(context, filters) except exception.ResourceClassNotFound as exc: raise webob.exc.HTTPBadRequest( 'Invalid resource class in resources parameter: %(error)s' % {'error': exc}) except exception.TraitNotFound as exc: raise webob.exc.HTTPBadRequest( 'Invalid trait(s) in "required" parameter: %(error)s' % {'error': exc}) response = req.response output, last_modified = _serialize_providers( req.environ, resource_providers, want_version) response.body = encodeutils.to_utf8(jsonutils.dumps(output)) response.content_type = 'application/json' if want_version.matches((1, 15)): response.last_modified = last_modified response.cache_control = 'no-cache' return response @wsgi_wrapper.PlacementWsgify @util.require_content('application/json') def update_resource_provider(req): """PUT to update a single resource provider. On success return a 200 response with a representation of the updated resource provider. 
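    For illustration only, a request body might look like:

    {
        "name": "compute-node-01-renamed"
    }

    From microversion 1.14 a "parent_provider_uuid" may also be included, and from 1.37 re-parenting to a different parent is allowed (see allow_reparenting below).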
""" uuid = util.wsgi_path_item(req.environ, 'uuid') context = req.environ['placement.context'] context.can(policies.UPDATE) want_version = req.environ[microversion.MICROVERSION_ENVIRON] # The containing application will catch a not found here. resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) schema = rp_schema.PUT_RESOURCE_PROVIDER_SCHEMA if want_version.matches((1, 14)): schema = rp_schema.PUT_RP_SCHEMA_V1_14 allow_reparenting = want_version.matches((1, 37)) data = util.extract_json(req.body, schema) for field in rp_obj.ResourceProvider.SETTABLE_FIELDS: if field in data: setattr(resource_provider, field, data[field]) try: resource_provider.save(allow_reparenting=allow_reparenting) except db_exc.DBDuplicateEntry: raise webob.exc.HTTPConflict( 'Conflicting resource provider %(name)s already exists.' % {'name': data['name']}, comment=errors.DUPLICATE_NAME) except exception.ObjectActionError as exc: raise webob.exc.HTTPBadRequest( 'Unable to save resource provider %(rp_uuid)s: %(error)s' % {'rp_uuid': uuid, 'error': exc}) response = req.response response.status = 200 response.body = encodeutils.to_utf8(jsonutils.dumps( _serialize_provider(req.environ, resource_provider, want_version))) response.content_type = 'application/json' if want_version.matches((1, 15)): response.last_modified = resource_provider.updated_at response.cache_control = 'no-cache' return response ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/handlers/root.py0000664000175000017500000000450000000000000022741 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Handler for the root of the Placement API.""" from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import timeutils from placement import microversion from placement import wsgi_wrapper @wsgi_wrapper.PlacementWsgify def home(req): want_version = req.environ[microversion.MICROVERSION_ENVIRON] min_version = microversion.min_version_string() max_version = microversion.max_version_string() # NOTE(cdent): As sections of the api are added, links can be # added to this output to align with the guidelines at # http://specs.openstack.org/openstack/api-wg/guidelines/microversion_specification.html#version-discovery version_data = { 'id': 'v%s' % min_version, 'max_version': max_version, 'min_version': min_version, # for now there is only ever one version, so it must be CURRENT 'status': 'CURRENT', 'links': [{ # Point back to this same URL as the root of this version. # NOTE(cdent): We explicitly want this to be a relative-URL # representation of "this same URL", otherwise placement needs # to keep track of proxy addresses and the like, which we have # avoided thus far, in order to construct full URLs. Placement # is much easier to scale if we never track that stuff. 
'rel': 'self', 'href': '', }], } version_json = jsonutils.dumps({'versions': [version_data]}) req.response.body = encodeutils.to_utf8(version_json) req.response.content_type = 'application/json' if want_version.matches((1, 15)): req.response.cache_control = 'no-cache' req.response.last_modified = timeutils.utcnow(with_timezone=True) return req.response ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/handlers/trait.py0000664000175000017500000002365000000000000023110 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Traits handlers for Placement API.""" import jsonschema from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import timeutils import webob from placement import errors from placement import exception from placement import microversion from placement.objects import resource_provider as rp_obj from placement.objects import trait as trait_obj from placement.policies import trait as policies from placement.schemas import trait as schema from placement import util from placement import wsgi_wrapper def _normalize_traits_qs_param(qs): try: op, value = qs.split(':', 1) except ValueError: msg = ('Badly formatted name parameter. Expected name query string ' 'parameter in form: ' '?name=[in|startswith]:[name1,name2|prefix]. Got: "%s"') msg = msg % qs raise webob.exc.HTTPBadRequest(msg) filters = {} if op == 'in': filters['name_in'] = value.split(',') elif op == 'startswith': filters['prefix'] = value return filters def _serialize_traits(traits, want_version): last_modified = None get_last_modified = want_version.matches((1, 15)) trait_names = [] for trait in traits: if get_last_modified: last_modified = util.pick_last_modified(last_modified, trait) trait_names.append(trait.name) # If there were no traits, set last_modified to now last_modified = last_modified or timeutils.utcnow(with_timezone=True) return {'traits': trait_names}, last_modified @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.6') def put_trait(req): context = req.environ['placement.context'] context.can(policies.TRAITS_UPDATE) want_version = req.environ[microversion.MICROVERSION_ENVIRON] name = util.wsgi_path_item(req.environ, 'name') try: jsonschema.validate(name, schema.CUSTOM_TRAIT) except jsonschema.ValidationError: raise webob.exc.HTTPBadRequest( 'The trait is invalid. 
A valid trait must be no longer than ' '255 characters, start with the prefix "CUSTOM_" and use ' 'following characters: "A"-"Z", "0"-"9" and "_"') status = 204 try: trait = trait_obj.Trait.get_by_name(context, name) except exception.TraitNotFound: try: trait = trait_obj.Trait(context, name=name) trait.create() status = 201 except exception.TraitExists: # Something just created the trait pass req.response.status = status req.response.content_type = None req.response.location = util.trait_url(req.environ, trait) if want_version.matches((1, 15)): # If the TraitExists exception was hit above, created_at is None # so fall back to now for the last modified header. last_modified = (trait.created_at or timeutils.utcnow(with_timezone=True)) req.response.last_modified = last_modified req.response.cache_control = 'no-cache' return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.6') def get_trait(req): context = req.environ['placement.context'] context.can(policies.TRAITS_SHOW) want_version = req.environ[microversion.MICROVERSION_ENVIRON] name = util.wsgi_path_item(req.environ, 'name') try: trait = trait_obj.Trait.get_by_name(context, name) except exception.TraitNotFound as ex: raise webob.exc.HTTPNotFound(ex.format_message()) req.response.status = 204 req.response.content_type = None if want_version.matches((1, 15)): req.response.last_modified = trait.created_at req.response.cache_control = 'no-cache' return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.6') def delete_trait(req): context = req.environ['placement.context'] context.can(policies.TRAITS_DELETE) name = util.wsgi_path_item(req.environ, 'name') try: trait = trait_obj.Trait.get_by_name(context, name) trait.destroy() except exception.TraitNotFound as ex: raise webob.exc.HTTPNotFound(ex.format_message()) except exception.TraitCannotDeleteStandard as ex: raise webob.exc.HTTPBadRequest(ex.format_message()) except exception.TraitInUse as ex: raise webob.exc.HTTPConflict(ex.format_message()) req.response.status = 204 req.response.content_type = None return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.6') @util.check_accept('application/json') def list_traits(req): context = req.environ['placement.context'] context.can(policies.TRAITS_LIST) want_version = req.environ[microversion.MICROVERSION_ENVIRON] filters = {} util.validate_query_params(req, schema.LIST_TRAIT_SCHEMA) if 'name' in req.GET: filters = _normalize_traits_qs_param(req.GET['name']) if 'associated' in req.GET: if req.GET['associated'].lower() not in ['true', 'false']: raise webob.exc.HTTPBadRequest( 'The query parameter "associated" only accepts ' '"true" or "false"') filters['associated'] = ( True if req.GET['associated'].lower() == 'true' else False) traits = trait_obj.get_all(context, filters) req.response.status = 200 output, last_modified = _serialize_traits(traits, want_version) if want_version.matches((1, 15)): req.response.last_modified = last_modified req.response.cache_control = 'no-cache' req.response.body = encodeutils.to_utf8(jsonutils.dumps(output)) req.response.content_type = 'application/json' return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.6') @util.check_accept('application/json') def list_traits_for_resource_provider(req): context = req.environ['placement.context'] context.can(policies.RP_TRAIT_LIST) want_version = req.environ[microversion.MICROVERSION_ENVIRON] uuid = util.wsgi_path_item(req.environ, 'uuid') # Resource provider object is needed 
for two things: If it is # NotFound we'll get a 404 here, which needs to happen because # get_all_by_resource_provider can return an empty list. # It is also needed for the generation, used in the outgoing # representation. try: rp = rp_obj.ResourceProvider.get_by_uuid(context, uuid) except exception.NotFound as exc: raise webob.exc.HTTPNotFound( "No resource provider with uuid %(uuid)s found: %(error)s" % {'uuid': uuid, 'error': exc}) traits = trait_obj.get_all_by_resource_provider(context, rp) response_body, last_modified = _serialize_traits(traits, want_version) response_body["resource_provider_generation"] = rp.generation if want_version.matches((1, 15)): req.response.last_modified = last_modified req.response.cache_control = 'no-cache' req.response.status = 200 req.response.body = encodeutils.to_utf8(jsonutils.dumps(response_body)) req.response.content_type = 'application/json' return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.6') @util.require_content('application/json') def update_traits_for_resource_provider(req): context = req.environ['placement.context'] context.can(policies.RP_TRAIT_UPDATE) want_version = req.environ[microversion.MICROVERSION_ENVIRON] uuid = util.wsgi_path_item(req.environ, 'uuid') data = util.extract_json(req.body, schema.SET_TRAITS_FOR_RP_SCHEMA) rp_gen = data['resource_provider_generation'] traits = data['traits'] resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) if resource_provider.generation != rp_gen: raise webob.exc.HTTPConflict( "Resource provider's generation already changed. Please update " "the generation and try again.", json_formatter=util.json_error_formatter, comment=errors.CONCURRENT_UPDATE) trait_objs = trait_obj.get_all(context, filters={'name_in': traits}) traits_name = set([obj.name for obj in trait_objs]) non_existed_trait = set(traits) - set(traits_name) if non_existed_trait: raise webob.exc.HTTPBadRequest( "No such trait %s" % ', '.join(non_existed_trait)) resource_provider.set_traits(trait_objs) response_body, last_modified = _serialize_traits(trait_objs, want_version) response_body[ 'resource_provider_generation'] = resource_provider.generation if want_version.matches((1, 15)): req.response.last_modified = last_modified req.response.cache_control = 'no-cache' req.response.status = 200 req.response.body = encodeutils.to_utf8(jsonutils.dumps(response_body)) req.response.content_type = 'application/json' return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.6') def delete_traits_for_resource_provider(req): context = req.environ['placement.context'] context.can(policies.RP_TRAIT_DELETE) uuid = util.wsgi_path_item(req.environ, 'uuid') resource_provider = rp_obj.ResourceProvider.get_by_uuid(context, uuid) try: resource_provider.set_traits([]) except exception.ConcurrentUpdateDetected as e: raise webob.exc.HTTPConflict(e.format_message(), comment=errors.CONCURRENT_UPDATE) req.response.status = 204 req.response.content_type = None return req.response ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/handlers/usage.py0000664000175000017500000001317000000000000023065 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Placement API handlers for usage information.""" import collections from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import timeutils import webob from placement import exception from placement import microversion from placement.objects import resource_provider as rp_obj from placement.objects import usage as usage_obj from placement.policies import usage as policies from placement.schemas import usage as schema from placement import util from placement import wsgi_wrapper def _serialize_usages(resource_provider, usage): usage_dict = {resource.resource_class: resource.usage for resource in usage} return {'resource_provider_generation': resource_provider.generation, 'usages': usage_dict} @wsgi_wrapper.PlacementWsgify @util.check_accept('application/json') def list_usages(req): """GET a dictionary of resource provider usage by resource class. If the resource provider does not exist return a 404. On success return a 200 with an application/json representation of the usage dictionary. """ context = req.environ['placement.context'] context.can(policies.PROVIDER_USAGES) uuid = util.wsgi_path_item(req.environ, 'uuid') want_version = req.environ[microversion.MICROVERSION_ENVIRON] # Resource provider object needed for two things: If it is # NotFound we'll get a 404 here, which needs to happen because # get_all_by_resource_provider_uuid can return an empty list. # It is also needed for the generation, used in the outgoing # representation. try: resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) except exception.NotFound as exc: raise webob.exc.HTTPNotFound( "No resource provider with uuid %(uuid)s found: %(error)s" % {'uuid': uuid, 'error': exc}) usage = usage_obj.get_all_by_resource_provider_uuid(context, uuid) response = req.response response.body = encodeutils.to_utf8(jsonutils.dumps( _serialize_usages(resource_provider, usage))) req.response.content_type = 'application/json' if want_version.matches((1, 15)): req.response.cache_control = 'no-cache' # While it would be possible to generate a last-modified time # based on the collection of allocations that result in a usage # value (with some spelunking in the SQL) that doesn't align with # the question that is being asked in a request for usages: What # is the usage, now? So the last-modified time is set to utcnow. req.response.last_modified = timeutils.utcnow(with_timezone=True) return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.9') @util.check_accept('application/json') def get_total_usages(req): """GET the sum of usages for a project or a project/user. On success return a 200 and an application/json body representing the sum/total of usages. Return 404 Not Found if the wanted microversion does not match. 
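    For illustration only (the resource classes, amounts and the consumer
    type name below are hypothetical), a response before microversion 1.38
    has the form::

        {"usages": {"VCPU": 2, "MEMORY_MB": 1024}}

    while at microversion 1.38 and later usages are grouped by consumer
    type and each type gains a ``consumer_count`` key::

        {"usages": {"INSTANCE": {"consumer_count": 1,
                                 "VCPU": 2,
                                 "MEMORY_MB": 1024}}}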
""" project_id = req.GET.get('project_id') user_id = req.GET.get('user_id') consumer_type = req.GET.get('consumer_type') context = req.environ['placement.context'] context.can( policies.TOTAL_USAGES, target={'project_id': project_id}) want_version = req.environ[microversion.MICROVERSION_ENVIRON] want_schema = schema.GET_USAGES_SCHEMA_1_9 show_consumer_type = want_version.matches((1, 38)) if show_consumer_type: want_schema = schema.GET_USAGES_SCHEMA_V1_38 util.validate_query_params(req, want_schema) if show_consumer_type: usages = usage_obj.get_by_consumer_type( context, project_id, user_id=user_id, consumer_type=consumer_type) else: usages = usage_obj.get_all_by_project_user(context, project_id, user_id=user_id) response = req.response if show_consumer_type: usage = collections.defaultdict(dict) for resource in usages: ct = resource.consumer_type rc = resource.resource_class cc = resource.consumer_count used = resource.usage usage[ct][rc] = used usage[ct]['consumer_count'] = cc usages_dict = { 'usages': usage } else: usages_dict = {'usages': {resource.resource_class: resource.usage for resource in usages}} response.body = encodeutils.to_utf8(jsonutils.dumps(usages_dict)) req.response.content_type = 'application/json' if want_version.matches((1, 15)): req.response.cache_control = 'no-cache' # While it would be possible to generate a last-modified time # based on the collection of allocations that result in a usage # value (with some spelunking in the SQL) that doesn't align with # the question that is being asked in a request for usages: What # is the usage, now? So the last-modified time is set to utcnow. req.response.last_modified = timeutils.utcnow(with_timezone=True) return req.response ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/handlers/util.py0000664000175000017500000002532400000000000022742 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """DB Utility methods for placement.""" import collections from oslo_log import log as logging import webob from placement import errors from placement import exception from placement.objects import consumer as consumer_obj from placement.objects import consumer_type as consumer_type_obj from placement.objects import project as project_obj from placement.objects import user as user_obj LOG = logging.getLogger(__name__) RequestAttr = collections.namedtuple('RequestAttr', ['project', 'user', 'consumer_type_id']) def get_or_create_consumer_type_id(ctx, name): """Tries to fetch the provided consumer_type and creates a new one if it does not exist. :param ctx: The request context. :param name: The name of the consumer type. :returns: The id of the ConsumerType object. 
""" try: return ctx.ct_cache.id_from_string(name) except exception.ConsumerTypeNotFound: cons_type = consumer_type_obj.ConsumerType(ctx, name=name) try: cons_type.create() return cons_type.id except exception.ConsumerTypeExists: # another thread created concurrently, so try again return get_or_create_consumer_type_id(ctx, name) def _get_or_create_project(ctx, project_id): try: proj = project_obj.Project.get_by_external_id(ctx, project_id) except exception.NotFound: # Auto-create the project if we found no record of it... try: proj = project_obj.Project(ctx, external_id=project_id) proj.create() except exception.ProjectExists: # No worries, another thread created this project already proj = project_obj.Project.get_by_external_id(ctx, project_id) return proj def _get_or_create_user(ctx, user_id): try: user = user_obj.User.get_by_external_id(ctx, user_id) except exception.NotFound: # Auto-create the user if we found no record of it... try: user = user_obj.User(ctx, external_id=user_id) user.create() except exception.UserExists: # No worries, another thread created this user already user = user_obj.User.get_by_external_id(ctx, user_id) return user def _create_consumer(ctx, consumer_uuid, project, user, consumer_type_id): created_new_consumer = False try: consumer = consumer_obj.Consumer( ctx, uuid=consumer_uuid, project=project, user=user, consumer_type_id=consumer_type_id) consumer.create() created_new_consumer = True except exception.ConsumerExists: # Another thread created this consumer already, verify whether # the consumer type matches consumer = consumer_obj.Consumer.get_by_uuid(ctx, consumer_uuid) # If the types don't match, update the consumer record if consumer_type_id != consumer.consumer_type_id: LOG.debug("Supplied consumer type for consumer %s was " "different than existing record. Updating " "consumer record.", consumer_uuid) consumer.consumer_type_id = consumer_type_id consumer.update() return consumer, created_new_consumer def ensure_consumer(ctx, consumer_uuid, project_id, user_id, consumer_generation, consumer_type, want_version): """Ensures there are records in the consumers, projects and users table for the supplied external identifiers. Returns a 3-tuple containing: - the populated Consumer object containing Project and User sub-objects - a boolean indicating whether a new Consumer object was created (as opposed to an existing consumer record retrieved) - a dict of RequestAttr objects by consumer_uuid which contains the requested Project, User, and consumer type ID (which may be different than what is contained in an existing consumer record retrieved) :param ctx: The request context. :param consumer_uuid: The uuid of the consumer of the resources. :param project_id: The external ID of the project consuming the resources. :param user_id: The external ID of the user consuming the resources. :param consumer_generation: The generation provided by the user for this consumer. :param consumer_type: The type of consumer provided by the user. :param want_version: the microversion matcher. 
:raises webob.exc.HTTPConflict if consumer generation is required and there was a mismatch """ created_new_consumer = False requires_consumer_generation = want_version.matches((1, 28)) requires_consumer_type = want_version.matches((1, 38)) if project_id is None: project_id = ctx.config.placement.incomplete_consumer_project_id user_id = ctx.config.placement.incomplete_consumer_user_id proj = _get_or_create_project(ctx, project_id) user = _get_or_create_user(ctx, user_id) cons_type_id = None try: consumer = consumer_obj.Consumer.get_by_uuid(ctx, consumer_uuid) if requires_consumer_generation: if consumer.generation != consumer_generation: raise webob.exc.HTTPConflict( 'consumer generation conflict - ' 'expected %(expected_gen)s but got %(got_gen)s' % { 'expected_gen': consumer.generation, 'got_gen': consumer_generation, }, comment=errors.CONCURRENT_UPDATE) if requires_consumer_type: cons_type_id = get_or_create_consumer_type_id(ctx, consumer_type) except exception.NotFound: # If we are attempting to modify or create allocations after 1.26, we # need a consumer generation specified. The user must have specified # None for the consumer generation if we get here, since there was no # existing consumer with this UUID and therefore the user should be # indicating that they expect the consumer did not exist. if requires_consumer_generation: if consumer_generation is not None: raise webob.exc.HTTPConflict( 'consumer generation conflict - ' 'expected null but got %s' % consumer_generation, comment=errors.CONCURRENT_UPDATE) if requires_consumer_type: cons_type_id = get_or_create_consumer_type_id(ctx, consumer_type) # No such consumer. This is common for new allocations. Create the # consumer record consumer, created_new_consumer = _create_consumer( ctx, consumer_uuid, proj, user, cons_type_id) # Also return the project, user, and consumer type from the request to use # for rollbacks. request_attr = RequestAttr(proj, user, cons_type_id) return consumer, created_new_consumer, request_attr def update_consumers(consumers, request_attrs): """Update consumers with the requested Project, User, and consumer type ID if they are different. If the supplied project or user external identifiers do not match an existing consumer's project and user identifiers, the existing consumer's project and user IDs are updated to reflect the supplied ones. If the supplied consumer types do not match an existing consumer's consumer type, the existing consumer's consumer types are updated to reflect the supplied ones. :param consumers: a list of Consumer objects :param request_attrs: a dict of RequestAttr objects by consumer_uuid """ for consumer in consumers: request_attr = request_attrs[consumer.uuid] project = request_attr.project user = request_attr.user # Note: this can be None if the request microversion is < 1.38. consumer_type_id = request_attr.consumer_type_id # NOTE(jaypipes): The user may have specified a different project and # user external ID than the one that we had for the consumer. If this # is the case, go ahead and modify the consumer record with the # newly-supplied project/user information, but do not bump the consumer # generation (since it will be bumped in the # AllocationList.replace_all() method). # # TODO(jaypipes): This means that there may be a partial update. # Imagine a scenario where a user calls POST /allocations, and the # payload references two consumers. The first consumer is a new # consumer and is auto-created. 
The second consumer is an existing # consumer, but contains a different project or user ID than the # existing consumer's record. If the eventual call to # AllocationList.replace_all() fails for whatever reason (say, a # resource provider generation conflict or out of resources failure), # we will end up deleting the auto-created consumer and we will undo # the changes to the second consumer's project and user ID. # # NOTE(melwitt): The aforementioned rollback of changes is predicated # on the fact that the same transaction context is used for both # util.update_consumers() and AllocationList.replace_all() within the # same HTTP request. The @db_api.placement_context_manager.writer # decorator on the outermost method will nest to methods called within # the outermost method. if (project.external_id != consumer.project.external_id or user.external_id != consumer.user.external_id): LOG.debug("Supplied project or user ID for consumer %s was " "different than existing record. Updating consumer " "record.", consumer.uuid) consumer.project = project consumer.user = user consumer.update() # Update the consumer type if it's different than the existing one. if consumer_type_id and consumer_type_id != consumer.consumer_type_id: LOG.debug("Supplied consumer type for consumer %s was " "different than existing record. Updating " "consumer record.", consumer.uuid) consumer.consumer_type_id = consumer_type_id consumer.update() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/lib.py0000664000175000017500000005035100000000000020731 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Symbols intended to be imported by both placement code and placement API consumers. When placement is separated out, this module should be part of a common library that both placement and its consumers can require.""" import re import webob from placement import errors from placement import microversion from placement.schemas import common from placement import util # Querystring-related constants _QS_RESOURCES = 'resources' _QS_REQUIRED = 'required' _QS_MEMBER_OF = 'member_of' _QS_IN_TREE = 'in_tree' _QS_KEY_PATTERN = re.compile( r"^(%s)(%s)?$" % ('|'.join( (_QS_RESOURCES, _QS_REQUIRED, _QS_MEMBER_OF, _QS_IN_TREE)), common.GROUP_PAT)) _QS_KEY_PATTERN_1_33 = re.compile( r"^(%s)(%s)?$" % ('|'.join( (_QS_RESOURCES, _QS_REQUIRED, _QS_MEMBER_OF, _QS_IN_TREE)), common.GROUP_PAT_1_33)) # In newer microversion we no longer check for orphaned member_of # and required because "providers providing no inventory to this # request" are now legit with `same_subtree` queryparam accompanied. 
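# For example (the suffix names below are made up for illustration), a
# request such as
#   ?resources_COMPUTE=VCPU:1
#   &required_ACCEL=CUSTOM_FPGA
#   &same_subtree=_COMPUTE,_ACCEL
# is accepted even though the _ACCEL group requests no resources, because
# that group's suffix is listed in same_subtree.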
SAME_SUBTREE_VERSION = (1, 36) def _fix_one_forbidden(traits): forbidden = [trait for trait in traits if trait.startswith('!')] required = traits - set(forbidden) forbidden = set(trait.lstrip('!') for trait in forbidden) conflicts = forbidden & required return required, forbidden, conflicts class RequestGroup(object): def __init__(self, use_same_provider=True, resources=None, required_traits=None, forbidden_traits=None, member_of=None, in_tree=None, forbidden_aggs=None): """Create a grouping of resource and trait requests. :param use_same_provider: If True, (the default) this RequestGroup represents requests for resources and traits which must be satisfied by a single resource provider. If False, represents a request for resources and traits in any resource provider in the same tree, or a sharing provider. :param resources: A dict of { resource_class: amount, ... } :param required_traits: A list of set of trait names. E.g.: [{T1, T2}, {T3}] means ((T1 or T2) and T3) :param forbidden_traits: A set of { trait_name, ... } :param member_of: A list of [ [aggregate_UUID], [aggregate_UUID, aggregate_UUID] ... ] :param in_tree: A UUID of a root or a non-root provider from whose tree this RequestGroup must be satisfied. """ self.use_same_provider = use_same_provider self.resources = resources or {} self.required_traits = required_traits or [] self.forbidden_traits = forbidden_traits or set() self.member_of = member_of or [] self.in_tree = in_tree self.forbidden_aggs = forbidden_aggs or set() def __str__(self): ret = 'RequestGroup(use_same_provider=%s' % str(self.use_same_provider) ret += ', resources={%s}' % ', '.join( '%s:%d' % (rc, amount) for rc, amount in sorted(list(self.resources.items()))) all_traits = set() fragments = [] for any_traits in self.required_traits: if len(any_traits) == 1: all_traits.add(list(any_traits)[0]) else: fragments.append('(' + ' or '.join(sorted(any_traits)) + ')') if all_traits: fragments.append(', '.join(trait for trait in sorted(all_traits))) if self.forbidden_traits: fragments.append( ', '.join( '!' + trait for trait in sorted(self.forbidden_traits))) ret += ', traits=(%s)' % ' and '.join(fragments) ret += ', aggregates=[%s]' % ', '.join( sorted('[%s]' % ', '.join(sorted(agglist)) for agglist in sorted(self.member_of))) ret += ')' return ret @staticmethod def _parse_request_items(req, verbose_suffix): ret = {} pattern = _QS_KEY_PATTERN_1_33 if verbose_suffix else _QS_KEY_PATTERN for key, val in req.GET.items(): match = pattern.match(key) if not match: continue # `prefix` is 'resources', 'required', 'member_of', or 'in_tree' # `suffix` is a number in microversion < 1.33, a string 1-64 # characters long of [a-zA-Z0-9_-] in microversion >= 1.33, or None prefix, suffix = match.groups() suffix = suffix or '' if suffix not in ret: ret[suffix] = RequestGroup(use_same_provider=bool(suffix)) request_group = ret[suffix] if prefix == _QS_RESOURCES: request_group.resources = util.normalize_resources_qs_param( val) elif prefix == _QS_REQUIRED: ( request_group.required_traits, request_group.forbidden_traits, ) = util.normalize_traits_qs_params(req, suffix) elif prefix == _QS_MEMBER_OF: # special handling of member_of qparam since we allow multiple # member_of params at microversion 1.24. # NOTE(jaypipes): Yes, this is inefficient to do this when # there are multiple member_of query parameters, but we do this # so we can error out if someone passes an "orphaned" member_of # request group. 
# TODO(jaypipes): Do validation of query parameters using # JSONSchema request_group.member_of, request_group.forbidden_aggs = ( util.normalize_member_of_qs_params(req, suffix)) elif prefix == _QS_IN_TREE: request_group.in_tree = util.normalize_in_tree_qs_params( val) return ret @staticmethod def _check_for_one_resources(by_suffix, resourceless_suffixes): if len(resourceless_suffixes) == len(by_suffix): msg = ('There must be at least one resources or resources[$S] ' 'parameter.') raise webob.exc.HTTPBadRequest( msg, comment=errors.QUERYPARAM_MISSING_VALUE) @staticmethod def _check_resourceless_suffix(subtree_suffixes, resourceless_suffixes): bad_suffixes = [suffix for suffix in resourceless_suffixes if suffix not in subtree_suffixes] if bad_suffixes: msg = ("Resourceless suffixed group request should be specified " "in `same_subtree` query param: bad group(s) - " "%(suffixes)s.") % {'suffixes': bad_suffixes} raise webob.exc.HTTPBadRequest( msg, comment=errors.QUERYPARAM_BAD_VALUE) @staticmethod def _check_actual_suffix(subtree_suffixes, by_suffix): bad_suffixes = [suffix for suffix in subtree_suffixes if suffix not in by_suffix] if bad_suffixes: msg = ("Real suffixes should be specified in `same_subtree`: " "%(bad_suffixes)s not found in %(suffixes)s.") % { 'bad_suffixes': bad_suffixes, 'suffixes': list(by_suffix.keys())} raise webob.exc.HTTPBadRequest( msg, comment=errors.QUERYPARAM_BAD_VALUE) @staticmethod def _check_for_orphans(by_suffix): # Ensure any group with 'required' or 'member_of' also has 'resources'. orphans = [('required%s' % suff) for suff, group in by_suffix.items() if group.required_traits and not group.resources] if orphans: msg = ( 'All traits parameters must be associated with resources. ' 'Found the following orphaned traits keys: %s') raise webob.exc.HTTPBadRequest(msg % ', '.join(orphans)) orphans = [('member_of%s' % suff) for suff, group in by_suffix.items() if not group.resources and ( group.member_of or group.forbidden_aggs)] if orphans: msg = ('All member_of parameters must be associated with ' 'resources. Found the following orphaned member_of ' 'keys: %s') raise webob.exc.HTTPBadRequest(msg % ', '.join(orphans)) # All request groups must have resources (which is almost, but not # quite, verified by the orphan checks above). if not all(grp.resources for grp in by_suffix.values()): msg = "All request groups must specify resources." raise webob.exc.HTTPBadRequest(msg) # The above would still pass if there were no request groups if not by_suffix: msg = ( "At least one request group (`resources` or `resources{$S}`) " "is required.") raise webob.exc.HTTPBadRequest(msg) @staticmethod def _check_forbidden(by_suffix): conflicting_traits = [] for suff, group in by_suffix.items(): for any_traits in group.required_traits: if all( trait in group.forbidden_traits for trait in any_traits ): conflicting_traits.append( 'required%s: (%s)' % (suff, ', '.join(sorted(any_traits)))) if conflicting_traits: msg = ( 'Conflicting required and forbidden traits found in the ' 'following traits keys: %s') # TODO(efried): comment=errors.QUERYPARAM_BAD_VALUE raise webob.exc.HTTPBadRequest( msg % ', '.join(sorted(conflicting_traits))) @classmethod def dict_from_request(cls, req, rqparams): """Parse suffixed resources, traits, and member_of groupings out of a querystring dict found in a webob Request. 
The input req contains a query string of the form: ?resources=$RESOURCE_CLASS_NAME:$AMOUNT,$RESOURCE_CLASS_NAME:$AMOUNT &required=$TRAIT_NAME,$TRAIT_NAME&member_of=in:$AGG1_UUID,$AGG2_UUID &in_tree=$RP_UUID &resources1=$RESOURCE_CLASS_NAME:$AMOUNT,RESOURCE_CLASS_NAME:$AMOUNT &required1=$TRAIT_NAME,$TRAIT_NAME&member_of1=$AGG_UUID &resources2=$RESOURCE_CLASS_NAME:$AMOUNT,RESOURCE_CLASS_NAME:$AMOUNT &required2=$TRAIT_NAME,$TRAIT_NAME&member_of2=$AGG_UUID &required2=in:$TRAIT_NAME,$TRAIT_NAME These are parsed in groups according to the arbitrary suffix of the key. For each group, a RequestGroup instance is created containing that group's resources, required traits, and member_of. For the (single) group with no suffix, the RequestGroup.use_same_provider attribute is False; for the granular groups it is True. If a trait in the required parameter is prefixed with ``!`` this indicates that that trait must not be present on the resource providers in the group. That is, the trait is forbidden. Forbidden traits are processed only if the microversion supports. If the value of a `required*` is prefixed with 'in:' then the traits in the value are ORred together. The return is a dict, keyed by the suffix of these RequestGroup instances (or the empty string for the unidentified group). As an example, if qsdict represents the query string: ?resources=VCPU:2,MEMORY_MB:1024,DISK_GB=50 &required=HW_CPU_X86_VMX,CUSTOM_STORAGE_RAID &member_of=9323b2b1-82c9-4e91-bdff-e95e808ef954 &member_of=in:8592a199-7d73-4465-8df6-ab00a6243c82,ddbd9226-d6a6-475e-a85f-0609914dd058 # noqa &in_tree=b9fc9abb-afc2-44d7-9722-19afc977446a &resources1=SRIOV_NET_VF:2 &required1=CUSTOM_PHYSNET_PUBLIC,CUSTOM_SWITCH_A &resources2=SRIOV_NET_VF:1 &required2=!CUSTOM_PHYSNET_PUBLIC &required2=CUSTOM_GOLD &required2=in:CUSTOM_FOO,CUSTOM_BAR ...the return value will be: { '': RequestGroup( use_same_provider=False, resources={ "VCPU": 2, "MEMORY_MB": 1024, "DISK_GB" 50, }, required_traits=[ {"HW_CPU_X86_VMX"}, {"CUSTOM_STORAGE_RAID"}, ], member_of=[ [9323b2b1-82c9-4e91-bdff-e95e808ef954], [8592a199-7d73-4465-8df6-ab00a6243c82, ddbd9226-d6a6-475e-a85f-0609914dd058], ], in_tree=b9fc9abb-afc2-44d7-9722-19afc977446a, ), '1': RequestGroup( use_same_provider=True, resources={ "SRIOV_NET_VF": 2, }, required_traits=[ {"CUSTOM_PHYSNET_PUBLIC"}, {"CUSTOM_SWITCH_A"}, ], ), '2': RequestGroup( use_same_provider=True, resources={ "SRIOV_NET_VF": 1, }, required_traits=[ {"CUSTOM_GOLD"}, {"CUSTOM_FOO", "CUSTOM_BAR"}, forbidden_traits=[ "CUSTOM_PHYSNET_PUBLIC", ], ), } :param req: webob.Request object :param rqparams: RequestWideParams object :return: A dict, keyed by suffix, of RequestGroup instances. :raises `webob.exc.HTTPBadRequest` if any value is malformed, or if the suffix of a resourceless request is not in the `rqparams.same_subtrees`. """ want_version = req.environ[microversion.MICROVERSION_ENVIRON] # Control whether we handle forbidden traits. 
allow_forbidden = want_version.matches((1, 22)) # Control whether we want verbose suffixes verbose_suffix = want_version.matches((1, 33)) # dict of the form: { suffix: RequestGroup } to be returned by_suffix = cls._parse_request_items(req, verbose_suffix) if want_version.matches(SAME_SUBTREE_VERSION): resourceless_suffixes = set( suffix for suffix, grp in by_suffix.items() if not grp.resources) subtree_suffixes = set().union(*rqparams.same_subtrees) cls._check_for_one_resources(by_suffix, resourceless_suffixes) cls._check_resourceless_suffix( subtree_suffixes, resourceless_suffixes) cls._check_actual_suffix(subtree_suffixes, by_suffix) else: cls._check_for_orphans(by_suffix) # check conflicting traits in the request if allow_forbidden: cls._check_forbidden(by_suffix) return by_suffix class RequestWideParams(object): """GET /allocation_candidates params that apply to the request as a whole. This is in contrast with individual request groups (list of RequestGroup above). """ def __init__(self, limit=None, group_policy=None, anchor_required_traits=None, anchor_forbidden_traits=None, same_subtrees=None): """Create a RequestWideParams. :param limit: An integer, N, representing the maximum number of allocation candidates to return. If CONF.placement.randomize_allocation_candidates is True this will be a random sampling of N of the available results. If False then the first N results, in whatever order the database picked them, will be returned. In either case if there are fewer than N total results, all the results will be returned. :param group_policy: String indicating how RequestGroups with use_same_provider=True should interact with each other. If the value is "isolate", we will filter out allocation requests where any such RequestGroups are satisfied by the same RP. :param anchor_required_traits: Set of trait names which the anchor of each returned allocation candidate must possess, regardless of any RequestGroup filters. Note that anchor_required_traits does not support the any-trait format the RequestGroup.required_traits does. :param anchor_forbidden_traits: Set of trait names which the anchor of each returned allocation candidate must NOT possess, regardless of any RequestGroup filters. :param same_subtrees: A list of sets of request group suffix strings where each set of strings represents the suffixes from one same_subtree query param. If provided, all of the resource providers satisfying the specified request groups must be rooted at one of the resource providers satisfying the request groups. """ self.limit = limit self.group_policy = group_policy self.anchor_required_traits = anchor_required_traits self.anchor_forbidden_traits = anchor_forbidden_traits self.same_subtrees = same_subtrees or [] @classmethod def from_request(cls, req): # TODO(efried): Make it an error to specify limit more than once - # maybe when we make group_policy optional. limit = req.GET.getall('limit') # JSONschema has already confirmed that limit has the form # of an integer. if limit: limit = int(limit[0]) # TODO(efried): Make it an error to specify group_policy more than once # - maybe when we make it optional. 
group_policy = req.GET.getall('group_policy') or None # Schema ensures we get either "none" or "isolate" if group_policy: group_policy = group_policy[0] anchor_required_traits = None anchor_forbidden_traits = None root_required = req.GET.getall('root_required') if root_required: if len(root_required) > 1: raise webob.exc.HTTPBadRequest( "Query parameter 'root_required' may be specified only " "once.", comment=errors.ILLEGAL_DUPLICATE_QUERYPARAM) # NOTE(gibi): root_required does not support any-traits so here # we continue using the old query parsing function that does not # accept the `in:` prefix and that always returns a flat trait # list anchor_required_traits, anchor_forbidden_traits, conflicts = ( _fix_one_forbidden( util.normalize_traits_qs_param_to_legacy_value( root_required[0], allow_forbidden=True))) if conflicts: raise webob.exc.HTTPBadRequest( 'Conflicting required and forbidden traits found in ' 'root_required: %s' % ', '.join(conflicts), comment=errors.QUERYPARAM_BAD_VALUE) same_subtree = req.GET.getall('same_subtree') # Construct a list of sets of request group suffixes strings. same_subtrees = [] if same_subtree: for val in same_subtree: suffixes = set(substr.strip() for substr in val.split(',')) if '' in suffixes: raise webob.exc.HTTPBadRequest( 'Empty string (unsuffixed group) can not be specified ' 'in `same_subtree` ', comment=errors.QUERYPARAM_BAD_VALUE) same_subtrees.append(suffixes) return cls( limit=limit, group_policy=group_policy, anchor_required_traits=anchor_required_traits, anchor_forbidden_traits=anchor_forbidden_traits, same_subtrees=same_subtrees) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/microversion.py0000664000175000017500000002073500000000000022705 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Microversion handling.""" # NOTE(cdent): This code is taken from enamel: # https://github.com/jaypipes/enamel and was the original source of # the code now used in microversion_parse library. 
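# As used throughout the handlers in this package, a handler is pinned to
# a microversion range by stacking this module's version_handler decorator
# under the wsgify wrapper, for example (from placement/handlers/trait.py):
#
#   @wsgi_wrapper.PlacementWsgify
#   @microversion.version_handler('1.6')
#   def put_trait(req):
#       ...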
import collections import inspect import microversion_parse import webob SERVICE_TYPE = 'placement' MICROVERSION_ENVIRON = '%s.microversion' % SERVICE_TYPE VERSIONED_METHODS = collections.defaultdict(list) # The Canonical Version List VERSIONS = [ '1.0', '1.1', # initial support for aggregate.get_aggregates and set_aggregates '1.2', # Adds /resource_classes resource endpoint '1.3', # Adds 'member_of' query parameter to get resource providers # that are members of any of the listed aggregates '1.4', # Adds resources query string parameter in GET /resource_providers '1.5', # Adds DELETE /resource_providers/{uuid}/inventories '1.6', # Adds /traits and /resource_providers{uuid}/traits resource # endpoints '1.7', # PUT /resource_classes/{name} is bodiless create or update '1.8', # Adds 'project_id' and 'user_id' required request parameters to # PUT /allocations '1.9', # Adds GET /usages '1.10', # Adds GET /allocation_candidates resource endpoint '1.11', # Adds 'allocations' link to the GET /resource_providers response '1.12', # Add project_id and user_id to GET /allocations/{consumer_uuid} # and PUT to /allocations/{consumer_uuid} in the same dict form # as GET. The 'allocation_requests' format in GET # /allocation_candidates is updated to be the same as well. '1.13', # Adds POST /allocations to set allocations for multiple consumers '1.14', # Adds parent and root provider UUID on resource provider # representation and 'in_tree' filter on GET /resource_providers '1.15', # Include last-modified and cache-control headers '1.16', # Add 'limit' query parameter to GET /allocation_candidates '1.17', # Add 'required' query parameter to GET /allocation_candidates and # return traits in the provider summary. '1.18', # Support ?required= queryparam on GET /resource_providers '1.19', # Include generation and conflict detection in provider aggregates # APIs '1.20', # Return 200 with provider payload from POST /resource_providers '1.21', # Support ?member_of=in: queryparam on # GET /allocation_candidates '1.22', # Support forbidden traits in the required parameter of # GET /resource_providers and GET /allocation_candidates '1.23', # Add support for error codes in error response JSON '1.24', # Support multiple ?member_of= queryparams on # GET /resource_providers '1.25', # Adds support for granular resource requests via numbered # querystring groups in GET /allocation_candidates '1.26', # Add ability to specify inventory with reserved value equal to # total. '1.27', # Include all resource class inventories in `provider_summaries` # field in response of `GET /allocation_candidates` API even if # the resource class is not in the requested resources. '1.28', # Add support for consumer generation '1.29', # Support nested providers in GET /allocation_candidates API. '1.30', # Add POST /reshaper for atomically migrating resource provider # inventories and allocations. '1.31', # Add in_tree and in_tree queryparam on # `GET /allocation_candidates` API '1.32', # Support negative member_of queryparams on # `GET /resource_providers` and `GET /allocation_candidates` '1.33', # Support granular resource requests with suffixes that match # [A-Za-z0-9_-]{1,64}. '1.34', # Include a mappings key in allocation requests that shows which # resource providers satisfied which request group suffix. '1.35', # Add a `root_required` queryparam on `GET /allocation_candidates` '1.36', # Add a `same_subtree` parameter on GET /allocation_candidates # and allow resourceless requests for groups in `same_subtree`. 
'1.37', # Allow re-parenting and un-parenting resource providers '1.38', # Adds ``consumer_type`` (required) key in the request body of # ``POST /allocations``, ``PUT /allocations/{consumer_uuid}`` # and in the response of ``GET /allocations/{consumer_uuid}``. # ``GET /usages`` request will also gain ``consumer_type`` key as # an optional queryparam to filter usages based on consumer_types. # ``GET /usages`` response will group results based on the # consumer type and will include a new ``consumer_count`` key per # type irrespective of whether the ``consumer_type`` was specified # in the request. The corresponding changes to ``/reshaper`` are # included. '1.39', # Adds support for the ``in:`` syntax in the ``required`` query # parameter in the ``GET /resource_providers`` API as well as to # the ``required`` and ``requiredN`` query params of the # ``GET /allocation_candidates`` API. ] def max_version_string(): return VERSIONS[-1] def min_version_string(): return VERSIONS[0] # Based on code in twisted # https://github.com/twisted/twisted/blob/trunk/twisted/python/deprecate.py def _fully_qualified_name(handler): """Return the name of a function used as an HTTP API handler, qualified by module name. """ if inspect.isfunction(handler): module_name = handler.__module__ return "%s.%s" % (module_name, handler.__name__) # We got a class, object method, or module. This is a coding error. raise TypeError("_fully_qualified_name received bad handler type. " "Module-level function required.") def _find_method(qualified_name, version, status_code): """Look in VERSIONED_METHODS for method with right name matching version. If no match is found a HTTPError corresponding to status_code will be returned. """ # A KeyError shouldn't be possible here, but let's be robust # just in case. method_list = VERSIONED_METHODS.get(qualified_name, []) for min_version, max_version, func in method_list: if min_version <= version <= max_version: return func raise webob.exc.status_map[status_code] def version_handler(min_ver, max_ver=None, status_code=404): """Decorator for versioning API methods. Add as a decorator to a placement API handler to constrain the microversions at which it will run. Add after the ``wsgify`` decorator. This does not check for version intersections. That's the domain of tests. :param min_ver: A string of two numerals, X.Y indicating the minimum version allowed for the decorated method. :param max_ver: A string of two numerals, X.Y, indicating the maximum version allowed for the decorated method. :param status_code: A status code to indicate error, 404 by default """ def decorator(f): min_version = microversion_parse.parse_version_string(min_ver) if max_ver: max_version = microversion_parse.parse_version_string(max_ver) else: max_version = microversion_parse.parse_version_string( max_version_string()) qualified_name = _fully_qualified_name(f) VERSIONED_METHODS[qualified_name].append( (min_version, max_version, f)) def decorated_func(req, *args, **kwargs): version = req.environ[MICROVERSION_ENVIRON] return _find_method( qualified_name, version, status_code)(req, *args, **kwargs) # Sort highest min version to beginning of list. 
VERSIONED_METHODS[qualified_name].sort(key=lambda x: x[0], reverse=True) return decorated_func return decorator ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2527778 openstack_placement-13.0.0/placement/objects/0000775000175000017500000000000000000000000021236 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/objects/__init__.py0000664000175000017500000000000000000000000023335 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/objects/allocation.py0000664000175000017500000005564000000000000023747 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections from oslo_db import api as oslo_db_api from oslo_log import log as logging import sqlalchemy as sa from sqlalchemy import sql from placement.db.sqlalchemy import models from placement import db_api from placement import exception from placement.objects import consumer as consumer_obj from placement.objects import project as project_obj from placement.objects import resource_provider as rp_obj from placement.objects import user as user_obj _ALLOC_TBL = models.Allocation.__table__ _CONSUMER_TBL = models.Consumer.__table__ _INV_TBL = models.Inventory.__table__ _PROJECT_TBL = models.Project.__table__ _RP_TBL = models.ResourceProvider.__table__ _USER_TBL = models.User.__table__ LOG = logging.getLogger(__name__) class Allocation(object): def __init__(self, id=None, resource_provider=None, consumer=None, resource_class=None, used=0, updated_at=None, created_at=None): self.id = id self.resource_provider = resource_provider self.resource_class = resource_class self.consumer = consumer self.used = used self.updated_at = updated_at self.created_at = created_at @db_api.placement_context_manager.writer def _delete_allocations_for_consumer(ctx, consumer_id): """Deletes any existing allocations that correspond to the allocations to be written. This is wrapped in a transaction, so if the write subsequently fails, the deletion will also be rolled back. """ del_sql = _ALLOC_TBL.delete().where( _ALLOC_TBL.c.consumer_id == consumer_id) ctx.session.execute(del_sql) @db_api.placement_context_manager.writer def _delete_allocations_by_ids(ctx, alloc_ids): """Deletes allocations having an internal id value in the set of supplied IDs """ del_sql = _ALLOC_TBL.delete().where(_ALLOC_TBL.c.id.in_(alloc_ids)) ctx.session.execute(del_sql) def _check_capacity_exceeded(ctx, allocs): """Checks to see if the supplied allocation records would result in any of the inventories involved having their capacity exceeded. Raises an InvalidAllocationCapacityExceeded exception if any inventory would be exhausted by the allocation. 
Raises an InvalidAllocationConstraintsViolated exception if any of the `step_size`, `min_unit` or `max_unit` constraints in an inventory will be violated by any one of the allocations. If no inventories would be exceeded or violated by the allocations, the function returns a list of `ResourceProvider` objects that contain the generation at the time of the check. :param ctx: `placement.context.RequestContext` that has an oslo_db Session :param allocs: List of `Allocation` objects to check """ # The SQL generated below looks like this: # SELECT # rp.id, # rp.uuid, # rp.generation, # inv.resource_class_id, # inv.total, # inv.reserved, # inv.allocation_ratio, # allocs.used # FROM resource_providers AS rp # JOIN inventories AS i1 # ON rp.id = i1.resource_provider_id # LEFT JOIN ( # SELECT resource_provider_id, resource_class_id, SUM(used) AS used # FROM allocations # WHERE resource_class_id IN ($RESOURCE_CLASSES) # AND resource_provider_id IN ($RESOURCE_PROVIDERS) # GROUP BY resource_provider_id, resource_class_id # ) AS allocs # ON inv.resource_provider_id = allocs.resource_provider_id # AND inv.resource_class_id = allocs.resource_class_id # WHERE rp.id IN ($RESOURCE_PROVIDERS) # AND inv.resource_class_id IN ($RESOURCE_CLASSES) # # We then take the results of the above and determine if any of the # inventory will have its capacity exceeded. rc_ids = set([ctx.rc_cache.id_from_string(a.resource_class) for a in allocs]) provider_uuids = set([a.resource_provider.uuid for a in allocs]) provider_ids = set([a.resource_provider.id for a in allocs]) usage = sa.select( _ALLOC_TBL.c.resource_provider_id, _ALLOC_TBL.c.resource_class_id, sql.func.sum(_ALLOC_TBL.c.used).label('used'), ) usage = usage.where( sa.and_(_ALLOC_TBL.c.resource_class_id.in_(rc_ids), _ALLOC_TBL.c.resource_provider_id.in_(provider_ids))) usage = usage.group_by(_ALLOC_TBL.c.resource_provider_id, _ALLOC_TBL.c.resource_class_id) usage = usage.subquery(name='usage') inv_join = sql.join( _RP_TBL, _INV_TBL, sql.and_(_RP_TBL.c.id == _INV_TBL.c.resource_provider_id, _INV_TBL.c.resource_class_id.in_(rc_ids))) primary_join = sql.outerjoin( inv_join, usage, sql.and_( _INV_TBL.c.resource_provider_id == usage.c.resource_provider_id, _INV_TBL.c.resource_class_id == usage.c.resource_class_id) ) sel = sa.select( _RP_TBL.c.id.label('resource_provider_id'), _RP_TBL.c.uuid, _RP_TBL.c.generation, _INV_TBL.c.resource_class_id, _INV_TBL.c.total, _INV_TBL.c.reserved, _INV_TBL.c.allocation_ratio, _INV_TBL.c.min_unit, _INV_TBL.c.max_unit, _INV_TBL.c.step_size, usage.c.used, ).select_from(primary_join) sel = sel.where( sa.and_(_RP_TBL.c.id.in_(provider_ids), _INV_TBL.c.resource_class_id.in_(rc_ids))) records = ctx.session.execute(sel) # Create a map keyed by (rp_uuid, res_class) for the records in the DB usage_map = {} provs_with_inv = set() for record in records: map_key = (record.uuid, record.resource_class_id) if map_key in usage_map: raise KeyError("%s already in usage_map, bad query" % str(map_key)) usage_map[map_key] = record provs_with_inv.add(record.uuid) # Ensure that all providers have existing inventory missing_provs = provider_uuids - provs_with_inv if missing_provs: class_str = ', '.join([ctx.rc_cache.string_from_id(rc_id) for rc_id in rc_ids]) provider_str = ', '.join(missing_provs) raise exception.InvalidInventory( resource_class=class_str, resource_provider=provider_str) res_providers = {} rp_resource_class_sum = collections.defaultdict( lambda: collections.defaultdict(int)) for alloc in allocs: rc_id = 
ctx.rc_cache.id_from_string(alloc.resource_class) rp_uuid = alloc.resource_provider.uuid if rp_uuid not in res_providers: res_providers[rp_uuid] = alloc.resource_provider amount_needed = alloc.used rp_resource_class_sum[rp_uuid][rc_id] += amount_needed # No use checking usage if we're not asking for anything if amount_needed == 0: continue key = (rp_uuid, rc_id) try: usage = usage_map[key] except KeyError: # The resource class at rc_id is not in the usage map. raise exception.InvalidInventory( resource_class=alloc.resource_class, resource_provider=rp_uuid) allocation_ratio = usage.allocation_ratio min_unit = usage.min_unit max_unit = usage.max_unit step_size = usage.step_size # check min_unit, max_unit, step_size if (amount_needed < min_unit or amount_needed > max_unit or amount_needed % step_size != 0): LOG.warning( "Allocation for %(rc)s on resource provider %(rp)s " "violates min_unit, max_unit, or step_size. " "Requested: %(requested)s, min_unit: %(min_unit)s, " "max_unit: %(max_unit)s, step_size: %(step_size)s", {'rc': alloc.resource_class, 'rp': rp_uuid, 'requested': amount_needed, 'min_unit': min_unit, 'max_unit': max_unit, 'step_size': step_size}) raise exception.InvalidAllocationConstraintsViolated( resource_class=alloc.resource_class, resource_provider=rp_uuid) # usage.used can be returned as None used = usage.used or 0 capacity = (usage.total - usage.reserved) * allocation_ratio if (capacity < (used + amount_needed) or capacity < (used + rp_resource_class_sum[rp_uuid][rc_id])): LOG.warning( "Over capacity for %(rc)s on resource provider %(rp)s. " "Needed: %(needed)s, Used: %(used)s, Capacity: %(cap)s", {'rc': alloc.resource_class, 'rp': rp_uuid, 'needed': amount_needed, 'used': used, 'cap': capacity}) raise exception.InvalidAllocationCapacityExceeded( resource_class=alloc.resource_class, resource_provider=rp_uuid) return res_providers @db_api.placement_context_manager.reader def _get_allocations_by_provider_id(ctx, rp_id): allocs = sa.alias(_ALLOC_TBL, name="a") consumers = sa.alias(_CONSUMER_TBL, name="c") projects = sa.alias(_PROJECT_TBL, name="p") users = sa.alias(_USER_TBL, name="u") # TODO(jaypipes): change this join to be on ID not UUID consumers_join = sa.join( allocs, consumers, allocs.c.consumer_id == consumers.c.uuid) projects_join = sa.join( consumers_join, projects, consumers.c.project_id == projects.c.id) users_join = sa.join( projects_join, users, consumers.c.user_id == users.c.id) sel = sa.select( allocs.c.id, allocs.c.resource_class_id, allocs.c.used, allocs.c.updated_at, allocs.c.created_at, consumers.c.id.label("consumer_id"), consumers.c.generation.label("consumer_generation"), consumers.c.uuid.label("consumer_uuid"), projects.c.id.label("project_id"), projects.c.external_id.label("project_external_id"), users.c.id.label("user_id"), users.c.external_id.label("user_external_id"), ).select_from(users_join) sel = sel.where(allocs.c.resource_provider_id == rp_id) return [dict(r._mapping) for r in ctx.session.execute(sel)] @db_api.placement_context_manager.reader def _get_allocations_by_consumer_uuid(ctx, consumer_uuid): allocs = sa.alias(_ALLOC_TBL, name="a") rp = sa.alias(_RP_TBL, name="rp") consumer = sa.alias(_CONSUMER_TBL, name="c") project = sa.alias(_PROJECT_TBL, name="p") user = sa.alias(_USER_TBL, name="u") # Build up the joins of the five tables we need to interact with. 
rp_join = sa.join(allocs, rp, allocs.c.resource_provider_id == rp.c.id) consumer_join = sa.join( rp_join, consumer, allocs.c.consumer_id == consumer.c.uuid) project_join = sa.join( consumer_join, project, consumer.c.project_id == project.c.id) user_join = sa.join( project_join, user, consumer.c.user_id == user.c.id) sel = sa.select( allocs.c.id, allocs.c.resource_provider_id, rp.c.name.label("resource_provider_name"), rp.c.uuid.label("resource_provider_uuid"), rp.c.generation.label("resource_provider_generation"), allocs.c.resource_class_id, allocs.c.used, consumer.c.id.label("consumer_id"), consumer.c.generation.label("consumer_generation"), consumer.c.consumer_type_id, consumer.c.uuid.label("consumer_uuid"), project.c.id.label("project_id"), project.c.external_id.label("project_external_id"), user.c.id.label("user_id"), user.c.external_id.label("user_external_id"), allocs.c.created_at, allocs.c.updated_at, ).select_from(user_join) sel = sel.where(allocs.c.consumer_id == consumer_uuid) return [dict(r._mapping) for r in ctx.session.execute(sel)] @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) @db_api.placement_context_manager.writer def _set_allocations(context, allocs): """Write a set of allocations. We must check that there is capacity for each allocation. If there is not, we roll back the entire set. :raises `exception.ResourceClassNotFound` if any resource class in any allocation in allocs cannot be found in the DB. :raises `exception.InvalidAllocationCapacityExceeded` if any inventory would be exhausted by the allocation. :raises `InvalidAllocationConstraintsViolated` if any of the `step_size`, `min_unit` or `max_unit` constraints in an inventory will be violated by any one of the allocations. :raises `ConcurrentUpdateDetected` if a generation for a resource provider or consumer failed its increment check. """ # First delete any existing allocations for any consumers. This # provides a clean slate for the consumers mentioned in the list of # allocations being manipulated. consumer_ids = set(alloc.consumer.uuid for alloc in allocs) for consumer_id in consumer_ids: _delete_allocations_for_consumer(context, consumer_id) # Before writing any allocation records, we check that the submitted # allocations do not cause any inventory capacity to be exceeded for # any resource provider and resource class involved in the allocation # transaction. _check_capacity_exceeded() raises an exception if any # inventory capacity is exceeded. If capacity is not exceeded, the # function returns a list of ResourceProvider objects containing the # generation of the resource provider at the time of the check. These # objects are used at the end of the allocation transaction as a guard # against concurrent updates. # # Don't check capacity when alloc.used is zero. Zero is not a valid # amount when making an allocation (the minimum consumption of a # resource is one) but is used in this method to indicate a need for # removal. Providing 0 is controlled at the HTTP API layer where PUT # /allocations does not allow empty allocations. When POST /allocations # is implemented it will allow for the special case of atomically setting # and # removing different allocations in the same request. # _check_capacity_exceeded will raise a ResourceClassNotFound # if any # allocation is using a resource class that does not exist.
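# Illustrative sketch (hypothetical values, not part of the flow below):
# given a consumer whose submitted allocations are
#   [Allocation(resource_provider=rp1, resource_class='VCPU', used=2),
#    Allocation(resource_provider=rp1, resource_class='DISK_GB', used=0)]
# the DISK_GB entry bypasses the capacity check and, because all of the
# consumer's existing allocations were deleted above, the net effect is
# "set VCPU to 2 and drop any previous DISK_GB allocation".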
visited_consumers = {} visited_rps = _check_capacity_exceeded(context, allocs) for alloc in allocs: if alloc.consumer.id not in visited_consumers: visited_consumers[alloc.consumer.id] = alloc.consumer # If alloc.used is set to zero that is a signal that we don't want # to (re-)create any allocations for this resource class. # _delete_allocations_for_consumer has already wiped out allocations # so just continue if alloc.used == 0: continue consumer_id = alloc.consumer.uuid rp = alloc.resource_provider rc_id = context.rc_cache.id_from_string(alloc.resource_class) ins_stmt = _ALLOC_TBL.insert().values( resource_provider_id=rp.id, resource_class_id=rc_id, consumer_id=consumer_id, used=alloc.used) res = context.session.execute(ins_stmt) alloc.id = res.lastrowid # Generation checking happens here. If the inventory for this resource # provider changed out from under us, this will raise a # ConcurrentUpdateDetected which can be caught by the caller to choose # to try again. It will also rollback the transaction so that these # changes always happen atomically. for rp in visited_rps.values(): rp.increment_generation() for consumer in visited_consumers.values(): consumer.increment_generation() # If any consumers involved in this transaction ended up having no # allocations, delete the consumer records. Exclude consumers that had # *some resource* in the allocation list with an amount > 0 since clearly # those consumers have allocations... cons_with_allocs = set(a.consumer.uuid for a in allocs if a.used > 0) all_cons = set(c.uuid for c in visited_consumers.values()) consumers_to_check = all_cons - cons_with_allocs consumer_obj.delete_consumers_if_no_allocations( context, consumers_to_check) def get_all_by_resource_provider(context, rp): db_allocs = _get_allocations_by_provider_id(context, rp.id) # Build up a list of Allocation objects, setting the Allocation object # fields to the same-named database record field we got from # _get_allocations_by_provider_id(). We already have the # ResourceProvider object so we just pass that object to the Allocation # object constructor as-is objs = [] for rec in db_allocs: consumer = consumer_obj.Consumer( context, id=rec['consumer_id'], uuid=rec['consumer_uuid'], generation=rec['consumer_generation'], project=project_obj.Project( context, id=rec['project_id'], external_id=rec['project_external_id']), user=user_obj.User( context, id=rec['user_id'], external_id=rec['user_external_id'])) objs.append( Allocation( id=rec['id'], resource_provider=rp, resource_class=context.rc_cache.string_from_id( rec['resource_class_id']), consumer=consumer, used=rec['used'], created_at=rec['created_at'], updated_at=rec['updated_at'])) return objs def get_all_by_consumer_id(context, consumer_id): db_allocs = _get_allocations_by_consumer_uuid(context, consumer_id) if not db_allocs: return [] # Build up the Consumer object (it's the same for all allocations # since we looked up by consumer ID) db_first = db_allocs[0] consumer = consumer_obj.Consumer( context, id=db_first['consumer_id'], uuid=db_first['consumer_uuid'], generation=db_first['consumer_generation'], consumer_type_id=db_first['consumer_type_id'], project=project_obj.Project( context, id=db_first['project_id'], external_id=db_first['project_external_id']), user=user_obj.User( context, id=db_first['user_id'], external_id=db_first['user_external_id'])) # Build up a list of Allocation objects, setting the Allocation object # fields to the same-named database record field we got from # _get_allocations_by_consumer_uuid().
# # NOTE(jaypipes): Unlike with get_all_by_resource_provider(), we do # NOT already have the ResourceProvider object so we construct a new # ResourceProvider object below by looking at the resource provider # fields returned by _get_allocations_by_consumer_uuid(). alloc_list = [ Allocation( id=rec['id'], resource_provider=rp_obj.ResourceProvider( context, id=rec['resource_provider_id'], uuid=rec['resource_provider_uuid'], name=rec['resource_provider_name'], generation=rec['resource_provider_generation']), resource_class=context.rc_cache.string_from_id( rec['resource_class_id']), consumer=consumer, used=rec['used'], created_at=rec['created_at'], updated_at=rec['updated_at']) for rec in db_allocs ] return alloc_list def replace_all(context, alloc_list): """Replace the supplied allocations. :note: This method always deletes all allocations for all consumers referenced in the list of Allocation objects and then replaces the consumer's allocations with the Allocation objects. In doing so, it will end up setting the Allocation.id attribute of each Allocation object. """ # Retry _set_allocations server side if there is a # ResourceProviderConcurrentUpdateDetected. We don't care about # sleeping, we simply want to reset the resource provider objects # and try again. For the sake of simplicity (and because we don't have # easy access to the information) we reload all the resource # providers that may be present. retries = context.config.placement.allocation_conflict_retry_count while retries: retries -= 1 try: _set_allocations(context, alloc_list) break except exception.ResourceProviderConcurrentUpdateDetected: LOG.debug('Retrying allocations write on resource provider ' 'generation conflict') # We only want to reload each unique resource provider once. alloc_rp_uuids = set( alloc.resource_provider.uuid for alloc in alloc_list) seen_rps = {} for rp_uuid in alloc_rp_uuids: # NOTE(melwitt): We use a separate database transaction to read # the resource provider because we might be wrapped in an outer # database transaction when we reach here. We want to get an # up-to-date generation value in case a racing request has # changed it after we began an outer transaction and this is # the first time we are reading the resource provider records # during our transaction. db_context_manager = db_api.placement_context_manager with db_context_manager.reader.independent.using(context): seen_rps[rp_uuid] = rp_obj.ResourceProvider.get_by_uuid( context, rp_uuid) for alloc in alloc_list: rp_uuid = alloc.resource_provider.uuid alloc.resource_provider = seen_rps[rp_uuid] else: # We ran out of retries so we need to raise again. # The log will automatically have request id info associated with # it that will allow tracing back to specific allocations. # Attempting to extract specific consumer or resource provider # information from the allocations is not coherent, as there # could be multiple consumers and providers.
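# Illustrative note (hypothetical caller code, shown only as a sketch):
# a caller that wants its own fallback behaviour could catch the
# exception re-raised below, e.g.
#   try:
#       alloc_obj.replace_all(ctx, alloc_list)
#   except exception.ResourceProviderConcurrentUpdateDetected:
#       ...  # e.g. surface a 409 to the client, or rebuild and retry
# (assuming `from placement.objects import allocation as alloc_obj`).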
LOG.warning('Exceeded retry limit of %d on allocations write', context.config.placement.allocation_conflict_retry_count) raise exception.ResourceProviderConcurrentUpdateDetected() def delete_all(context, alloc_list): consumer_uuids = set(alloc.consumer.uuid for alloc in alloc_list) alloc_ids = [alloc.id for alloc in alloc_list] _delete_allocations_by_ids(context, alloc_ids) consumer_obj.delete_consumers_if_no_allocations( context, consumer_uuids) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/objects/allocation_candidate.py0000664000175000017500000012706200000000000025741 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import copy import itertools import os_traits from oslo_log import log as logging import sqlalchemy as sa from placement.db.sqlalchemy import models from placement import db_api from placement import exception from placement.objects import research_context as res_ctx from placement.objects import resource_provider as rp_obj from placement.objects import trait as trait_obj from placement import util _ALLOC_TBL = models.Allocation.__table__ _INV_TBL = models.Inventory.__table__ _RP_TBL = models.ResourceProvider.__table__ LOG = logging.getLogger(__name__) class AllocationCandidates(object): """The AllocationCandidates object is a collection of possible allocations that match some request for resources, along with some summary information about the resource providers involved in these allocation candidates. """ def __init__(self, allocation_requests=None, provider_summaries=None): # A collection of allocation possibilities that can be attempted by the # caller that would, at the time of calling, meet the requested # resource constraints self.allocation_requests = allocation_requests # Information about usage and inventory that relate to any provider # contained in any of the AllocationRequest objects in the # allocation_requests field self.provider_summaries = provider_summaries @classmethod def get_by_requests(cls, context, groups, rqparams, nested_aware=True): """Returns an AllocationCandidates object containing all resource providers matching a set of supplied resource constraints, with a set of allocation requests constructed from that list of resource providers. If CONF.placement.randomize_allocation_candidates (on context.config) is True (default is False) then the order of the allocation requests will be randomized. :param context: placement.context.RequestContext object. :param groups: Dict, keyed by suffix, of placement.lib.RequestGroup :param rqparams: A RequestWideParams. :param nested_aware: If False, we are blind to nested architecture and can't pick resources from multiple providers even if they come from the same tree. :return: An instance of AllocationCandidates with allocation_requests and provider_summaries satisfying `requests`, limited according to `limit`. 
""" try: alloc_reqs, provider_summaries = cls._get_by_requests( context, groups, rqparams, nested_aware=nested_aware) except exception.ResourceProviderNotFound: alloc_reqs, provider_summaries = [], [] return cls( allocation_requests=alloc_reqs, provider_summaries=provider_summaries, ) @staticmethod def _get_by_one_request(rg_ctx, rw_ctx): """Get allocation candidates for one RequestGroup. Must be called from within an placement_context_manager.reader (or writer) context. :param rg_ctx: RequestGroupSearchContext. :param rw_ctx: RequestWideSearchContext. """ if not rg_ctx.use_same_provider and ( rg_ctx.exists_sharing or rg_ctx.exists_nested): # TODO(jaypipes): The check/callout to handle trees goes here. # Build a dict, keyed by resource class internal ID, of lists of # internal IDs of resource providers that share some inventory for # each resource class requested. # If there aren't any providers that have any of the # required traits, just exit early... if rg_ctx.required_traits: # TODO(cdent): Now that there is also a forbidden_trait_map # it should be possible to further optimize this attempt at # a quick return, but we leave that to future patches for # now. # NOTE(gibi): this optimization works by flattening the # required_trait nested list. So if the request contains # ((A or B) and C) trait request then we check if there is any # RP with either A, B or C. If none then we know that there is # no RP that can satisfy the original query either. trait_rps = res_ctx.get_provider_ids_having_any_trait( rg_ctx.context, { trait for any_traits in rg_ctx.required_traits for trait in any_traits }, ) if not trait_rps: return set() rp_candidates = res_ctx.get_trees_matching_all(rg_ctx, rw_ctx) return _alloc_candidates_multiple_providers( rg_ctx, rw_ctx, rp_candidates) # Either we are processing a single-RP request group, or there are no # sharing providers that (help) satisfy the request. Get a list of # tuples of (internal provider ID, root provider ID) that have ALL # the requested resources and more efficiently construct the # allocation requests. rp_tuples = res_ctx.get_provider_ids_matching(rg_ctx) return _alloc_candidates_single_provider(rg_ctx, rw_ctx, rp_tuples) @classmethod @db_api.placement_context_manager.reader def _get_by_requests(cls, context, groups, rqparams, nested_aware=True): rw_ctx = res_ctx.RequestWideSearchContext( context, rqparams, nested_aware) sharing = res_ctx.get_sharing_providers(context) # TODO(efried): If we ran anchors_for_sharing_providers here, we could # narrow to only sharing providers associated with our filtered trees. # Unclear whether this would be cheaper than waiting until we've # filtered sharing providers for other things (like resources). seen_rcs = set() candidates = {} for suffix, group in groups.items(): rg_ctx = res_ctx.RequestGroupSearchContext( context, group, rw_ctx.has_trees, sharing, suffix) # Which resource classes are requested in more than one group? for rc in rg_ctx.rcs: if rc in seen_rcs: rw_ctx.multi_group_rcs.add(rc) else: seen_rcs.add(rc) alloc_reqs = cls._get_by_one_request(rg_ctx, rw_ctx) LOG.debug("%s (suffix '%s') returned %d matches", str(group), str(suffix), len(alloc_reqs)) if not alloc_reqs: # Shortcut: If any one group resulted in no candidates, the # whole operation is shot. return [], [] # Mark each allocation request according to whether its # corresponding RequestGroup required it to be restricted to a # single provider. We'll need this later to evaluate group_policy. 
for areq in alloc_reqs: areq.use_same_provider = group.use_same_provider candidates[suffix] = alloc_reqs # At this point, each alloc_requests in `candidates` is independent of # the others. We need to fold them together such that each allocation # request satisfies *all* the incoming `requests`. The `candidates` # dict is guaranteed to contain entries for all suffixes, or we would # have short-circuited above. alloc_request_objs, summary_objs = _merge_candidates( candidates, rw_ctx) alloc_request_objs, summary_objs = rw_ctx.exclude_nested_providers( alloc_request_objs, summary_objs) return rw_ctx.limit_results(alloc_request_objs, summary_objs) class AllocationRequest(object): __slots__ = ('anchor_root_provider_uuid', 'use_same_provider', 'resource_requests', 'mappings') def __init__(self, anchor_root_provider_uuid=None, use_same_provider=None, resource_requests=None, mappings=None): # UUID of (the root of the tree including) the non-sharing resource # provider associated with this AllocationRequest. Internal use only, # not included when the object is serialized for output. self.anchor_root_provider_uuid = anchor_root_provider_uuid # Whether all AllocationRequestResources in this AllocationRequest are # required to be satisfied by the same provider (based on the # corresponding RequestGroup's use_same_provider attribute). Internal # use only, not included when the object is serialized for output. self.use_same_provider = use_same_provider self.resource_requests = resource_requests or [] # mappings will be presented as a dict during output, so ensure we have # a reasonable default here, despite mappings always being set. self.mappings = mappings or dict() def __repr__(self): anchor = (self.anchor_root_provider_uuid[-8:] if self.anchor_root_provider_uuid else '') usp = (self.use_same_provider if self.use_same_provider is not None else '') repr_str = ('%s(anchor=...%s, same_provider=%s, ' 'resource_requests=[%s])' % (self.__class__.__name__, anchor, usp, ', '.join([str(arr) for arr in self.resource_requests]))) return repr_str def __eq__(self, other): return (set(self.resource_requests) == set(other.resource_requests) and self.mappings == other.mappings) def __hash__(self): # We need a stable sort order on the resource requests to get an # accurate hash. To avoid needing to update the method everytime # the structure of an AllocationRequestResource changes, we can # sort on the hash of each request resource. sorted_rr = sorted(self.resource_requests, key=lambda x: hash(x)) return hash(tuple(sorted_rr)) def __copy__(self): # This is shallow copy, so resource_requests and mappings are the # same objects as prior to the copy. return self.__class__( anchor_root_provider_uuid=self.anchor_root_provider_uuid, use_same_provider=self.use_same_provider, resource_requests=self.resource_requests, mappings=self.mappings ) class AllocationRequestResource(object): __slots__ = 'resource_provider', 'resource_class', 'amount' def __init__(self, resource_provider=None, resource_class=None, amount=None): self.resource_provider = resource_provider self.resource_class = resource_class self.amount = amount def __eq__(self, other): return ((self.resource_provider.id == other.resource_provider.id) and (self.resource_class == other.resource_class) and (self.amount == other.amount)) def __hash__(self): return hash((self.resource_provider.id, self.resource_class, self.amount)) def __copy__(self): # This is shallow copy, so resource_provider is the same object as # prior to the copy. 
resource_class is a string here, not a # ResourceClass object return self.__class__( resource_provider=self.resource_provider, resource_class=self.resource_class, amount=self.amount) class ProviderSummary(object): __slots__ = 'resource_provider', 'resources', 'traits' def __init__(self, resource_provider=None, resources=None, traits=None): self.resource_provider = resource_provider self.resources = resources or [] self.traits = traits or [] class ProviderSummaryResource(object): __slots__ = 'resource_class', 'capacity', 'used', 'max_unit' def __init__(self, resource_class=None, capacity=None, used=None, max_unit=None): self.resource_class = resource_class self.capacity = capacity self.used = used # Internal use only; not included when the object is serialized for # output. self.max_unit = max_unit def _alloc_candidates_multiple_providers(rg_ctx, rw_ctx, rp_candidates): """Returns a set of allocation requests for a supplied set of requested resource amounts and tuples of (rp_id, root_id, rc_id). The supplied resource provider trees have capacity to satisfy ALL of the resources in the requested resources as well as ALL required traits that were requested by the user. This is a code path to get results for a RequestGroup with use_same_provider=False. In this scenario, we are able to use multiple providers within the same provider tree including sharing providers to satisfy different resources involved in a single request group. :param rg_ctx: RequestGroupSearchContext. :param rw_ctx: RequestWideSearchContext :param rp_candidates: RPCandidates object representing the providers that satisfy the request for resources. """ if not rp_candidates: return set() # Get all the root resource provider IDs. We should include the first # values of rp_tuples because while sharing providers are root providers, # they have their "anchor" providers for the second value. root_ids = rp_candidates.all_rps # Get a dict, keyed by resource provider internal ID, of trait string names # that provider has associated with it prov_traits = trait_obj.get_traits_by_provider_tree( rg_ctx.context, root_ids) # Extend rw_ctx.summaries_by_id dict, keyed by resource provider internal # ID, of ProviderSummary objects for all providers _build_provider_summaries(rg_ctx.context, rw_ctx, root_ids, prov_traits) # Get a dict, keyed by root provider internal ID, of a dict, keyed by # resource class internal ID, of lists of AllocationRequestResource objects tree_dict = collections.defaultdict(lambda: collections.defaultdict(list)) rc_cache = rg_ctx.context.rc_cache for rp in rp_candidates.rps_info: rp_summary = rw_ctx.summaries_by_id[rp.id] tree_dict[rp.root_id][rp.rc_id].append( AllocationRequestResource( resource_provider=rp_summary.resource_provider, resource_class=rc_cache.string_from_id(rp.rc_id), amount=rg_ctx.resources[rp.rc_id])) # Next, build up a set of allocation requests. These allocation requests # are AllocationRequest objects, containing resource provider UUIDs, # resource class names and amounts to consume from that resource provider alloc_requests = set() # Let's look into each tree for root_id, alloc_dict in tree_dict.items(): # Get request_groups, which is a list of lists of # AllocationRequestResource(ARR) per requested resource class(rc). 
# For example, if we have the alloc_dict: # {rc1_id: [ARR(rc1, rp1), ARR(rc1, rp2)], # rc2_id: [ARR(rc2, rp1), ARR(rc2, rp2)], # rc3_id: [ARR(rc3, rp1)]} # then the request_groups would be something like # [[ARR(rc1, rp1), ARR(rc1, rp2)], # [ARR(rc2, rp1), ARR(rc2, rp2)], # [ARR(rc3, rp1)]] # , which should be ordered by the resource class id. request_groups = [val for key, val in sorted(alloc_dict.items())] root_summary = rw_ctx.summaries_by_id[root_id] root_uuid = root_summary.resource_provider.uuid root_alloc_reqs = set() # Using itertools.product, we get all the combinations of resource # providers in a tree. # For example, the sample in the comment above becomes: # [(ARR(rc1, ss1), ARR(rc2, ss1), ARR(rc3, ss1)), # (ARR(rc1, ss1), ARR(rc2, ss2), ARR(rc3, ss1)), # (ARR(rc1, ss2), ARR(rc2, ss1), ARR(rc3, ss1)), # (ARR(rc1, ss2), ARR(rc2, ss2), ARR(rc3, ss1))] for res_requests in itertools.product(*request_groups): if not _check_traits_for_alloc_request( res_requests, rw_ctx.summaries_by_id, rg_ctx.required_trait_names, rg_ctx.forbidden_traits.keys()): # This combination doesn't satisfy trait constraints continue mappings = collections.defaultdict(set) for rr in res_requests: mappings[rg_ctx.suffix].add(rr.resource_provider.uuid) alloc_req = AllocationRequest(resource_requests=list(res_requests), anchor_root_provider_uuid=root_uuid, mappings=mappings) root_alloc_reqs.add(alloc_req) alloc_requests |= root_alloc_reqs return alloc_requests def _alloc_candidates_single_provider(rg_ctx, rw_ctx, rp_tuples): """Returns a set of allocation requests for a supplied set of requested resource amounts and resource providers. The supplied resource providers have capacity to satisfy ALL of the resources in the requested resources as well as ALL required traits that were requested by the user. This is used in two circumstances: - To get results for a RequestGroup with use_same_provider=True. - As an optimization when no sharing providers satisfy any of the requested resources, and nested providers are not in play. In these scenarios, we can more efficiently build the list of AllocationRequest and ProviderSummary objects due to not having to determine requests across multiple providers. :param rg_ctx: RequestGroupSearchContext :param rw_ctx: RequestWideSearchContext :param rp_tuples: List of two-tuples of (provider ID, root provider ID)s for providers that matched the requested resources """ if not rp_tuples: return set() # Get all root resource provider IDs. root_ids = set(p[1] for p in rp_tuples) # Get a dict, keyed by resource provider internal ID, of trait string names # that provider has associated with it prov_traits = trait_obj.get_traits_by_provider_tree( rg_ctx.context, root_ids) # Extend rw_ctx.summaries_by_id dict, keyed by resource provider internal # ID, of ProviderSummary objects for all providers _build_provider_summaries(rg_ctx.context, rw_ctx, root_ids, prov_traits) # Next, build up a list of allocation requests. 
These allocation requests # are AllocationRequest objects, containing resource provider UUIDs, # resource class names and amounts to consume from that resource provider alloc_requests = [] for rp_id, root_id in rp_tuples: rp_summary = rw_ctx.summaries_by_id[rp_id] req_obj = _allocation_request_for_provider( rg_ctx.context, rg_ctx.resources, rp_summary.resource_provider, suffix=rg_ctx.suffix) # Exclude this if its anchor (which is its root) isn't in our # prefiltered list of anchors if rw_ctx.in_filtered_anchors(root_id): alloc_requests.append(req_obj) # If this is a sharing provider, we have to include an extra # AllocationRequest for every possible anchor. traits = rp_summary.traits if os_traits.MISC_SHARES_VIA_AGGREGATE in traits: anchors = res_ctx.anchors_for_sharing_providers( rg_ctx.context, [rp_summary.resource_provider.id]) for anchor in anchors: # We already added self if anchor.anchor_id == root_id: continue # Only include if anchor is viable if not rw_ctx.in_filtered_anchors(anchor.anchor_id): continue req_obj = copy.copy(req_obj) req_obj.anchor_root_provider_uuid = anchor.anchor_uuid alloc_requests.append(req_obj) return alloc_requests def _allocation_request_for_provider(context, requested_resources, provider, suffix): """Returns an AllocationRequest object containing AllocationRequestResource objects for each resource class in the supplied requested resources dict. :param requested_resources: dict, keyed by resource class ID, of amounts being requested for that resource class :param provider: ResourceProvider object representing the provider of the resources. :param suffix: The suffix of the RequestGroup these resources are satisfying. """ resource_requests = [ AllocationRequestResource( resource_provider=provider, resource_class=context.rc_cache.string_from_id(rc_id), amount=amount ) for rc_id, amount in requested_resources.items() ] # NOTE(efried): This method only produces an AllocationRequest with its # anchor in its own tree. If the provider is a sharing provider, the # caller needs to identify the other anchors with which it might be # associated. # NOTE(tetsuro): The AllocationRequest has empty resource_requests for a # resourceless request. Still, it has the rp uuid in the mappings field. mappings = {suffix: set([provider.uuid])} return AllocationRequest( resource_requests=resource_requests, anchor_root_provider_uuid=provider.root_provider_uuid, mappings=mappings) def _build_provider_summaries(context, rw_ctx, root_ids, prov_traits): """Given a list of dicts of usage information and a map of providers to their associated string traits, returns a dict, keyed by resource provider ID, of ProviderSummary objects. Warning: This is side-effecty: It is extending the rw_ctx.summaries_by_id dict. Nothing is returned. :param context: placement.context.RequestContext object :param rw_ctx: placement.research_context.RequestWideSearchContext :param root_ids: A set of root resource provider ids :param prov_traits: A dict, keyed by internal resource provider ID, of string trait names associated with that provider """ # Filter resource providers by those we haven't seen yet. 
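# Illustrative example with hypothetical internal IDs: if root_ids is
# {1, 7} and rw_ctx.summaries_by_id already holds summaries keyed by
# provider IDs {1, 2, 3} (the tree rooted at 1 was handled for an
# earlier request group), only root 7 is "new" and needs usage and
# summary records built here.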
new_roots = root_ids - set(rw_ctx.summaries_by_id) if not new_roots: return # Get a dict-like usage information of resource providers in a tree where # at least one member of the tree is contributing resources or traits to # an allocation candidate, which has the following structure: # { # 'resource_provider_id': , # 'resource_provider_uuid': , # 'resource_class_id': , # 'total': integer, # 'reserved': integer, # 'allocation_ratio': float, # } usages = res_ctx.get_usages_by_provider_trees(context, new_roots) # Before we go creating provider summary objects, first grab all the # provider information (including root, parent and UUID information) for # the providers. provider_ids = _provider_ids_from_root_ids(context, new_roots) # Build up a dict, keyed by internal resource provider ID, of # ProviderSummary objects containing one or more ProviderSummaryResource # objects representing the resources the provider has inventory for. for usage in usages: rp_id = usage.resource_provider_id summary = rw_ctx.summaries_by_id.get(rp_id) if not summary: pids = provider_ids[rp_id] parent_id = pids.parent_id # If there is a parent, we can rely on it being in provider_ids # because for any single provider, it also contains the full # ancestry. parent_uuid = provider_ids[parent_id].uuid if parent_id else None # Update the parent_uuid_by_rp_uuid cache here. We know that we # will visit all providers in all trees in play during # _build_provider_summaries, so now is a good time. rw_ctx.parent_uuid_by_rp_uuid[pids.uuid] = parent_uuid summary = ProviderSummary( resource_provider=rp_obj.ResourceProvider( context, id=pids.id, uuid=pids.uuid, root_provider_uuid=provider_ids[pids.root_id].uuid, parent_provider_uuid=parent_uuid), resources=[], ) summary.traits = prov_traits[rp_id] rw_ctx.summaries_by_id[rp_id] = summary rc_id = usage.resource_class_id if rc_id is None: # NOTE(tetsuro): This provider doesn't have any inventory itself. # But we include this provider in summaries since another # provider in the same tree will be in the "allocation_request". # Let's skip the following and leave "ProviderSummary.resources" # field empty. continue # NOTE(jaypipes): usage.used may be None due to the LEFT JOIN of # the usages subquery, so we coerce NULL values to 0 here. It may # also be a Decimal, as that's the type that mysql tends to return # when func.sum is used in a query. We need an int, otherwise later # JSON serialization will not work. used = int(usage.used or 0) allocation_ratio = usage.allocation_ratio cap = int((usage.total - usage.reserved) * allocation_ratio) rc_name = context.rc_cache.string_from_id(rc_id) rpsr = ProviderSummaryResource( resource_class=rc_name, capacity=cap, used=used, max_unit=usage.max_unit, ) # Construct a dict, keyed by resource provider + resource class, of # ProviderSummaryResource. This will be used to do a final capacity # check/filter on each merged AllocationRequest. psum_key = (rp_id, rc_name) rw_ctx.psum_res_by_rp_rc[psum_key] = rpsr summary.resources.append(rpsr) def _check_traits_for_alloc_request(res_requests, summaries, required_traits, forbidden_traits): """Given a list of AllocationRequestResource objects, check if that combination can provide trait constraints. If it can, returns all resource provider internal IDs in play, else return an empty list. TODO(tetsuro): For optimization, we should move this logic to SQL in res_ctx.get_trees_matching_all(). 
:param res_requests: a list of AllocationRequestResource objects that have resource providers to be checked if they collectively satisfy trait constraints in the required_traits and forbidden_traits parameters. :param summaries: dict, keyed by resource provider id, of ProviderSummary objects containing usage and trait information for resource providers involved in the overall request :param required_traits: A list of set of trait names where traits in the sets are in OR relationship while traits in two different sets are in AND relationship. Each *allocation request's set of providers* must *collectively* fulfill this trait expression. :param forbidden_traits: A set of trait names that a resource provider must not have. """ all_prov_ids = [] all_traits = set() for res_req in res_requests: rp_id = res_req.resource_provider.id rp_summary = summaries[rp_id] rp_traits = set(rp_summary.traits) # Check if there are forbidden_traits conflict_traits = set(forbidden_traits) & set(rp_traits) if conflict_traits: LOG.debug('Excluding resource provider %s, it has ' 'forbidden traits: (%s).', rp_id, ', '.join(conflict_traits)) return [] all_prov_ids.append(rp_id) all_traits |= rp_traits # We need a match for *all* the items from the outer list of the # required_traits as that describes AND relationship, and we need at least # *one match* per nested trait set as that set describes OR relationship # so collect all the matches with the nested sets trait_matches = [ any_traits.intersection(all_traits) for any_traits in required_traits] # if some internal sets do not match to the provided traits then we have # missing trait (trait set) if not all(trait_matches): missing_traits = [ '(' + ' or '.join(any_traits) + ')' for any_traits, match in zip(required_traits, trait_matches) if not match ] LOG.debug( 'Excluding a set of allocation candidate %s : ' 'missing traits %s are not satisfied.', all_prov_ids, ' and '.join(any_traits for any_traits in missing_traits)) return [] return all_prov_ids def _consolidate_allocation_requests(areqs, rw_ctx): """Consolidates a list of AllocationRequest into one. :param areqs: A list containing one AllocationRequest for each input RequestGroup. This may mean that multiple resource_requests contain resource amounts of the same class from the same provider. :return: A single consolidated AllocationRequest, containing no resource_requests with duplicated (resource_provider, resource_class). """ # Construct a dict, keyed by resource provider UUID + resource class, of # AllocationRequestResource, consolidating as we go. arrs_by_rp_rc = {} # areqs must have at least one element. Save the anchor to populate the # returned AllocationRequest. anchor_rp_uuid = areqs[0].anchor_root_provider_uuid mappings = collections.defaultdict(set) for areq in areqs: # Sanity check: the anchor should be the same for every areq if anchor_rp_uuid != areq.anchor_root_provider_uuid: # This should never happen. If it does, it's a dev bug. 
raise ValueError( "Expected every AllocationRequest in " "`_consolidate_allocation_requests` to have the same " "anchor!") for arr in areq.resource_requests: key = (arr.resource_provider.id, arr.resource_class) if key not in arrs_by_rp_rc: arrs_by_rp_rc[key] = rw_ctx.copy_arr_if_needed(arr) else: arrs_by_rp_rc[key].amount += arr.amount for suffix, providers in areq.mappings.items(): mappings[suffix].update(providers) return AllocationRequest( resource_requests=list(arrs_by_rp_rc.values()), anchor_root_provider_uuid=anchor_rp_uuid, mappings=mappings) def _get_areq_list_generators(areq_lists_by_anchor, all_suffixes): """Returns a generator for each anchor provider that generates viable candidates (areq_lists) for the given anchor. """ return [ # We're using itertools.product to go from this: # areq_lists_by_suffix = { # '': [areq__A, areq__B, ...], # '1': [areq_1_A, areq_1_B, ...], # ... # '42': [areq_42_A, areq_42_B, ...], # } # to this: # [ [areq__A, areq_1_A, ..., areq_42_A], Each of these lists is one # [areq__A, areq_1_A, ..., areq_42_B], solution to return. # [areq__A, areq_1_B, ..., areq_42_A], Each solution contains one # [areq__A, areq_1_B, ..., areq_42_B], AllocationRequest from each # [areq__B, areq_1_A, ..., areq_42_A], RequestGroup. So taken as a # [areq__B, areq_1_A, ..., areq_42_B], whole, each list is a viable # [areq__B, areq_1_B, ..., areq_42_A], (preliminary) candidate to # [areq__B, areq_1_B, ..., areq_42_B], return. # ..., # ] itertools.product(*list(areq_lists_by_suffix.values())) for areq_lists_by_suffix in areq_lists_by_anchor.values() # Filter out any entries that don't have allocation requests for # *all* suffixes (i.e. all RequestGroups) if set(areq_lists_by_suffix) == all_suffixes ] def _generate_areq_lists(rw_ctx, areq_lists_by_anchor, all_suffixes): strategy = ( rw_ctx.config.placement.allocation_candidates_generation_strategy) generators = _get_areq_list_generators(areq_lists_by_anchor, all_suffixes) if strategy == "depth-first": # Generates all solutions from the first anchor before moving to the # next return itertools.chain(*generators) if strategy == "breadth-first": # Generates solutions from anchors in a round-robin manner, so the # number of solutions generated is balanced across the viable # anchors. return util.roundrobin(*generators) raise ValueError("Strategy '%s' not recognized" % strategy) # TODO(efried): Move _merge_candidates to rw_ctx? def _merge_candidates(candidates, rw_ctx): """Given a dict, keyed by RequestGroup suffix, of allocation_requests, produce a single tuple of (allocation_requests, provider_summaries) that appropriately incorporates the elements from each. Each alloc_reqs in `candidates` satisfies one RequestGroup. This method creates a list of alloc_reqs, *each* of which satisfies *all* of the RequestGroups. For that merged list of alloc_reqs, a corresponding provider_summaries is produced. :param candidates: A dict, keyed by suffix string or '', of a set of allocation_requests to be merged. :param rw_ctx: RequestWideSearchContext. :return: A tuple of (allocation_requests, provider_summaries). """ # Build a dict, keyed by anchor root provider UUID, of dicts, keyed by # suffix, of nonempty lists of AllocationRequest. Each inner dict must # possess all of the suffix keys to be viable (i.e. contains at least # one AllocationRequest per RequestGroup).
# # areq_lists_by_anchor = # { anchor_root_provider_uuid: { # '': [AllocationRequest, ...], \ This dict must contain # '1': [AllocationRequest, ...], \ exactly one nonempty list per # ... / suffix to be viable. That # '42': [AllocationRequest, ...], / filtering is done later. # }, # ... # } areq_lists_by_anchor = collections.defaultdict( lambda: collections.defaultdict(list)) for suffix, areqs in candidates.items(): for areq in areqs: anchor = areq.anchor_root_provider_uuid areq_lists_by_anchor[anchor][suffix].append(areq) # Create all combinations picking one AllocationRequest from each list # for each anchor. areqs = set() all_suffixes = set(candidates) num_granular_groups = len(all_suffixes - set([''])) max_a_c = rw_ctx.config.placement.max_allocation_candidates for areq_list in _generate_areq_lists( rw_ctx, areq_lists_by_anchor, all_suffixes ): # At this point, each AllocationRequest in areq_list is still # marked as use_same_provider. This is necessary to filter by group # policy, which enforces how these interact with each other. # TODO(efried): Move _satisfies_group_policy to rw_ctx? if not _satisfies_group_policy( areq_list, rw_ctx.group_policy, num_granular_groups): continue if not _satisfies_same_subtree(areq_list, rw_ctx): continue # Now we go from this (where 'arr' is AllocationRequestResource): # [ areq__B(arrX, arrY, arrZ), # areq_1_A(arrM, arrN), # ..., # areq_42_B(arrQ) # ] # to this: # areq_combined(arrX, arrY, arrZ, arrM, arrN, arrQ) # Note that the information telling us which RequestGroup led to # which piece of the AllocationRequest has been lost from the outer # layer of the data structure (the key of areq_lists_by_suffix). # => We needed that to be present for the previous filter; we need # it to be *absent* for the next one. # => However, it still exists embedded in each # AllocationRequestResource. That's needed to construct the # mappings for the output. areq = _consolidate_allocation_requests(areq_list, rw_ctx) # Since we sourced this AllocationRequest from multiple # *independent* queries, it's possible that the combined result # now exceeds capacity where amounts of the same RP+RC were # folded together. So do a final capacity check/filter. if rw_ctx.exceeds_capacity(areq): continue areqs.add(areq) if max_a_c >= 0 and len(areqs) >= max_a_c: break # It's possible we've filtered out everything. If so, short out. if not areqs: return [], [] # Now we have to produce provider summaries. The provider summaries in # rw_ctx.summary_by_id contain all the information; we just need to filter # it down to only the providers in trees represented by our merged list of # allocation requests. tree_uuids = set() for areq in areqs: for arr in areq.resource_requests: tree_uuids.add(arr.resource_provider.root_provider_uuid) psums = [ psum for psum in rw_ctx.summaries_by_id.values() if psum.resource_provider.root_provider_uuid in tree_uuids] LOG.debug('Merging candidates yields %d allocation requests and %d ' 'provider summaries', len(areqs), len(psums)) return list(areqs), psums def _satisfies_group_policy(areqs, group_policy, num_granular_groups): """Applies group_policy to a list of AllocationRequest. Returns True or False, indicating whether this list of AllocationRequest satisfies group_policy, as follows: * "isolate": Each AllocationRequest with use_same_provider=True is satisfied by a single resource provider. If the "isolate" policy is in effect, each such AllocationRequest must be satisfied by a *unique* resource provider. * "none" or None: Always returns True. 
:param areqs: A list containing one AllocationRequest for each input RequestGroup. :param group_policy: String indicating how RequestGroups should interact with each other. If the value is "isolate", we will return False if AllocationRequests that came from RequestGroups keyed by nonempty suffixes are satisfied by the same provider. :param num_granular_groups: The number of granular (use_same_provider=True) RequestGroups in the request. :return: True if areqs satisfies group_policy; False otherwise. """ if group_policy != 'isolate': # group_policy="none" means no filtering return True # The number of unique resource providers referenced in the request groups # having use_same_provider=True must be equal to the number of granular # groups. num_granular_groups_in_areqs = len(set().union(*( # We can reliably use the first value of provider uuids in mappings: # all the resource_requests are satisfied by the same provider # by definition because use_same_provider is True. list(areq.mappings.values())[0] for areq in areqs if areq.use_same_provider))) if num_granular_groups == num_granular_groups_in_areqs: return True LOG.debug('Excluding the following set of AllocationRequest because ' 'group_policy=isolate and the number of granular groups in the ' 'set (%d) does not match the number of granular groups in the ' 'request (%d): %s', num_granular_groups_in_areqs, num_granular_groups, str(areqs)) return False def _satisfies_same_subtree(areqs, rw_ctx): """Applies same_subtree policy to a list of AllocationRequest. :param areqs: A list containing one AllocationRequest for each input RequestGroup. :param rw_ctx: The RequestWideSearchContext for this request, from which the following fields are used: same_subtrees: A list of sets of request group suffix strings. All of the resource providers satisfying the specified request groups must be rooted at one of the resource providers satisfying the request groups. parent_uuid_by_rp_uuid: A dict of parent uuids keyed by rp uuids. :return: True if areqs satisfies same_subtree policy; False otherwise. """ for same_subtree in rw_ctx.same_subtrees: # Collect RP uuids that must satisfy a single same_subtree constraint. rp_uuids = set().union(*(areq.mappings.get(suffix) for areq in areqs for suffix in same_subtree if areq.mappings.get(suffix))) if not _check_same_subtree(rp_uuids, rw_ctx.parent_uuid_by_rp_uuid): return False return True def _check_same_subtree(rp_uuids, parent_uuid_by_rp_uuid): """Returns True if given rp uuids are all in the same subtree. Note: the rps being in the same subtree means that all of the providers are rooted at one of those providers """ if len(rp_uuids) == 1: return True # A set of uuids of common ancestors of each rp in question common_ancestors = set.intersection(*( _get_ancestors_by_one_uuid(rp_uuid, parent_uuid_by_rp_uuid) for rp_uuid in rp_uuids)) # If any of the rp_uuids is in the common_ancestors set, then we know # that it is the root of the other rp_uuids in this same_subtree # constraint.
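# Illustrative example with a hypothetical tree root -> numa0 -> dev0
# (parent_uuid_by_rp_uuid = {root: None, numa0: root, dev0: numa0}):
# for rp_uuids = {numa0, dev0} the ancestor sets are {numa0, root} and
# {dev0, numa0, root}; their intersection {numa0, root} contains numa0,
# itself a member of rp_uuids, so the providers form a subtree rooted
# at numa0 and the check passes. For siblings {numa0, numa1} (numa1
# also a child of root), the only common ancestor is root, which is not
# in rp_uuids, so the check fails.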
return len(common_ancestors.intersection(rp_uuids)) != 0 def _get_ancestors_by_one_uuid( rp_uuid, parent_uuid_by_rp_uuid, ancestors=None): """Returns a set of uuids of ancestors for a given rp uuid""" if ancestors is None: ancestors = set([rp_uuid]) parent_uuid = parent_uuid_by_rp_uuid[rp_uuid] if parent_uuid is None: return ancestors ancestors.add(parent_uuid) return _get_ancestors_by_one_uuid( parent_uuid, parent_uuid_by_rp_uuid, ancestors=ancestors) def _provider_ids_from_root_ids(context, root_ids): """Given an iterable of internal root resource provider IDs, returns a dict, keyed by internal provider Id, of sqla objects describing those providers under the given root providers. :param root_ids: iterable of root provider IDs for trees to look up :returns: dict, keyed by internal provider Id, of sqla objects with the following attributes: id: resource provider internal id uuid: resource provider uuid parent_id: internal id of the resource provider's parent provider (None if there is no parent) root_id: internal id of the resource providers's root provider """ # SELECT # rp.id, rp.uuid, rp.parent_provider_id, rp.root_provider.id # FROM resource_providers AS rp # WHERE rp.root_provider_id IN ($root_ids) me = sa.alias(_RP_TBL, name="me") sel = sa.select( me.c.id, me.c.uuid, me.c.parent_provider_id.label('parent_id'), me.c.root_provider_id.label('root_id'), ).where( me.c.root_provider_id.in_(sa.bindparam('root_ids', expanding=True)) ) ret = {} for r in context.session.execute(sel, {'root_ids': list(root_ids)}): ret[r.id] = r return ret ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/objects/consumer.py0000664000175000017500000002300000000000000023436 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_db import exception as db_exc import sqlalchemy as sa from placement.db.sqlalchemy import models from placement import db_api from placement import exception from placement.objects import project as project_obj from placement.objects import user as user_obj CONSUMER_TBL = models.Consumer.__table__ _ALLOC_TBL = models.Allocation.__table__ @db_api.placement_context_manager.writer def create_incomplete_consumers(ctx, batch_size): """Finds all the consumer records that are missing for allocations and creates consumer records for them, using the "incomplete consumer" project and user CONF options. Returns a tuple containing two identical elements with the number of consumer records created, since this is the expected return format for data migration routines. """ # Create a record in the projects table for our incomplete project incomplete_proj_id = project_obj.ensure_incomplete_project(ctx) # Create a record in the users table for our incomplete user incomplete_user_id = user_obj.ensure_incomplete_user(ctx) # Create a consumer table record for all consumers where # allocations.consumer_id doesn't exist in the consumers table. Use the # incomplete consumer project and user ID. 
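# Roughly, the statement assembled below is (a sketch only; the exact
# SQL depends on the backend):
#   INSERT INTO consumers (uuid, project_id, user_id)
#   SELECT a.consumer_id, :incomplete_proj_id, :incomplete_user_id
#   FROM allocations a
#   LEFT JOIN consumers c ON a.consumer_id = c.uuid
#   WHERE c.id IS NULL
#   GROUP BY a.consumer_id
#   LIMIT :batch_size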
alloc_to_consumer = sa.outerjoin( _ALLOC_TBL, CONSUMER_TBL, _ALLOC_TBL.c.consumer_id == CONSUMER_TBL.c.uuid) sel = sa.select( _ALLOC_TBL.c.consumer_id, incomplete_proj_id, incomplete_user_id, ) sel = sel.select_from(alloc_to_consumer) sel = sel.where(CONSUMER_TBL.c.id.is_(None)) # NOTE(mnaser): It is possible to have multiple consumers having many # allocations to the same resource provider, which would # make the INSERT FROM SELECT fail due to duplicates. sel = sel.group_by(_ALLOC_TBL.c.consumer_id) sel = sel.limit(batch_size) target_cols = ['uuid', 'project_id', 'user_id'] ins_stmt = CONSUMER_TBL.insert().from_select(target_cols, sel) res = ctx.session.execute(ins_stmt) return res.rowcount, res.rowcount @db_api.placement_context_manager.writer def delete_consumers_if_no_allocations(ctx, consumer_uuids): """Looks to see if any of the supplied consumers has any allocations and if not, deletes the consumer record entirely. :param ctx: `placement.context.RequestContext` that contains an oslo_db Session :param consumer_uuids: UUIDs of the consumers to check and maybe delete """ # Delete consumers that are not referenced in the allocations table cons_to_allocs_join = sa.outerjoin( CONSUMER_TBL, _ALLOC_TBL, CONSUMER_TBL.c.uuid == _ALLOC_TBL.c.consumer_id) subq = sa.select(CONSUMER_TBL.c.uuid).select_from(cons_to_allocs_join) subq = subq.where(sa.and_( _ALLOC_TBL.c.consumer_id.is_(None), CONSUMER_TBL.c.uuid.in_(consumer_uuids))) no_alloc_consumers = [r[0] for r in ctx.session.execute(subq).fetchall()] del_stmt = CONSUMER_TBL.delete() del_stmt = del_stmt.where(CONSUMER_TBL.c.uuid.in_(no_alloc_consumers)) ctx.session.execute(del_stmt) @db_api.placement_context_manager.reader def _get_consumer_by_uuid(ctx, uuid): # The SQL for this looks like the following: # SELECT # c.id, c.uuid, c.consumer_type_id, # p.id AS project_id, p.external_id AS project_external_id, # u.id AS user_id, u.external_id AS user_external_id, # c.updated_at, c.created_at # FROM consumers c # INNER JOIN projects p # ON c.project_id = p.id # INNER JOIN users u # ON c.user_id = u.id # WHERE c.uuid = $uuid consumers = sa.alias(CONSUMER_TBL, name="c") projects = sa.alias(project_obj.PROJECT_TBL, name="p") users = sa.alias(user_obj.USER_TBL, name="u") c_to_p_join = sa.join( consumers, projects, consumers.c.project_id == projects.c.id) c_to_u_join = sa.join( c_to_p_join, users, consumers.c.user_id == users.c.id) sel = sa.select( consumers.c.id, consumers.c.uuid, consumers.c.consumer_type_id, projects.c.id.label("project_id"), projects.c.external_id.label("project_external_id"), users.c.id.label("user_id"), users.c.external_id.label("user_external_id"), consumers.c.generation, consumers.c.updated_at, consumers.c.created_at, ).select_from(c_to_u_join) sel = sel.where(consumers.c.uuid == uuid) res = ctx.session.execute(sel).fetchone() if not res: raise exception.ConsumerNotFound(uuid=uuid) return dict(res._mapping) @db_api.placement_context_manager.writer def _delete_consumer(ctx, consumer): """Deletes the supplied consumer. :param ctx: `placement.context.RequestContext` that contains an oslo_db Session :param consumer: `Consumer` whose generation should be updated. 
""" del_stmt = CONSUMER_TBL.delete().where(CONSUMER_TBL.c.id == consumer.id) ctx.session.execute(del_stmt) class Consumer(object): def __init__(self, context, id=None, uuid=None, project=None, user=None, generation=None, consumer_type_id=None, updated_at=None, created_at=None): self._context = context self.id = id self.uuid = uuid self.project = project self.user = user self.generation = generation self.consumer_type_id = consumer_type_id self.updated_at = updated_at self.created_at = created_at @staticmethod def _from_db_object(ctx, target, source): target.id = source['id'] target.uuid = source['uuid'] target.generation = source['generation'] target.consumer_type_id = source['consumer_type_id'] target.created_at = source['created_at'] target.updated_at = source['updated_at'] target.project = project_obj.Project( ctx, id=source['project_id'], external_id=source['project_external_id']) target.user = user_obj.User( ctx, id=source['user_id'], external_id=source['user_external_id']) target._context = ctx return target @classmethod def get_by_uuid(cls, ctx, uuid): res = _get_consumer_by_uuid(ctx, uuid) return cls._from_db_object(ctx, cls(ctx), res) def create(self): @db_api.placement_context_manager.writer def _create_in_db(ctx): db_obj = models.Consumer( uuid=self.uuid, project_id=self.project.id, user_id=self.user.id, consumer_type_id=self.consumer_type_id) try: db_obj.save(ctx.session) # NOTE(jaypipes): We don't do the normal _from_db_object() # thing here because models.Consumer doesn't have a # project_external_id or user_external_id attribute. self.id = db_obj.id self.generation = db_obj.generation except db_exc.DBDuplicateEntry: raise exception.ConsumerExists(uuid=self.uuid) _create_in_db(self._context) def update(self): """Used to update the consumer's project and user information without incrementing the consumer's generation. """ @db_api.placement_context_manager.writer def _update_in_db(ctx): upd_stmt = CONSUMER_TBL.update().values( project_id=self.project.id, user_id=self.user.id, consumer_type_id=self.consumer_type_id) # NOTE(jaypipes): We add the generation check to the WHERE clause # above just for safety. We don't need to check that the statement # actually updated a single row. If it did not, then the # consumer.increment_generation() call that happens in # AllocationList.replace_all() will end up raising # ConcurrentUpdateDetected anyway upd_stmt = upd_stmt.where(sa.and_( CONSUMER_TBL.c.id == self.id, CONSUMER_TBL.c.generation == self.generation)) ctx.session.execute(upd_stmt) _update_in_db(self._context) def increment_generation(self): """Increments the consumer's generation. :raises placement.exception.ConcurrentUpdateDetected: if another thread updated the same consumer's view of its allocations in between the time when this object was originally read and the call which modified the consumer's state (e.g. 
replacing allocations for a consumer) """ consumer_gen = self.generation new_generation = consumer_gen + 1 upd_stmt = CONSUMER_TBL.update().where(sa.and_( CONSUMER_TBL.c.id == self.id, CONSUMER_TBL.c.generation == consumer_gen)).values( generation=new_generation) res = self._context.session.execute(upd_stmt) if res.rowcount != 1: raise exception.ConcurrentUpdateDetected self.generation = new_generation def delete(self): _delete_consumer(self._context, self) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/objects/consumer_type.py0000664000175000017500000000441200000000000024505 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_db import exception as db_exc from placement.db.sqlalchemy import models from placement import db_api from placement import exception CONSUMER_TYPE_TBL = models.ConsumerType.__table__ _CONSUMER_TYPES_LOCK = 'consumer_types_sync' _CONSUMER_TYPES_SYNCED = False NULL_CONSUMER_TYPE_ALIAS = 'unknown' @db_api.placement_context_manager.writer def _create_in_db(ctx, name): db_obj = models.ConsumerType(name=name) try: db_obj.save(ctx.session) return db_obj except db_exc.DBDuplicateEntry: raise exception.ConsumerTypeExists(name=name) class ConsumerType(object): def __init__(self, context, id=None, name=None, updated_at=None, created_at=None): self._context = context self.id = id self.name = name self.updated_at = updated_at self.created_at = created_at @staticmethod def _from_db_object(ctx, target, source): target.id = source['id'] target.name = source['name'] target.created_at = source['created_at'] target.updated_at = source['updated_at'] target._context = ctx return target # NOTE(cdent): get_by_id and get_by_name are not currently used # but are left in place to indicate the smooth migration from # direct db access to using the AttributeCache. @classmethod def get_by_id(cls, ctx, id): return ctx.ct_cache.all_from_string(ctx.ct_cache.string_from_id(id)) @classmethod def get_by_name(cls, ctx, name): return ctx.ct_cache.all_from_string(name) def create(self): ct = _create_in_db(self._context, self.name) self._from_db_object(self._context, self, ct) self._context.ct_cache.clear() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/objects/inventory.py0000664000175000017500000000661600000000000023656 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import sqlalchemy as sa from placement.db.sqlalchemy import models from placement import db_api _INV_TBL = models.Inventory.__table__ class Inventory(object): # kwargs included because some callers pass resource_class_id # but it is not used. def __init__(self, id=None, resource_provider=None, resource_class=None, total=None, reserved=0, min_unit=1, max_unit=1, step_size=1, allocation_ratio=1.0, updated_at=None, created_at=None, **kwargs): self.id = id self.resource_provider = resource_provider self.resource_class = resource_class self.total = total self.reserved = reserved self.min_unit = min_unit self.max_unit = max_unit self.step_size = step_size self.allocation_ratio = allocation_ratio self.updated_at = updated_at self.created_at = created_at @property def capacity(self): """Inventory capacity, adjusted by allocation_ratio.""" return int((self.total - self.reserved) * self.allocation_ratio) def find(inventories, res_class): """Return the inventory record from the list of Inventory records that matches the supplied resource class, or None. :param inventories: A list of Inventory objects. :param res_class: A string name of a resource class to match. A ValueError is raised if the value is not a string. """ if not isinstance(res_class, str): raise ValueError('res_class must be a string') for inv_rec in inventories: if inv_rec.resource_class == res_class: return inv_rec def get_all_by_resource_provider(context, rp): db_inv = _get_inventory_by_provider_id(context, rp.id) # Build up a list of Inventory objects, setting the Inventory object # fields to the same-named database record field we got from # _get_inventory_by_provider_id(). We already have the ResourceProvider # object so we just pass that object to the Inventory object # constructor as-is inv_list = [ Inventory( resource_provider=rp, resource_class=context.rc_cache.string_from_id( rec['resource_class_id']), **rec) for rec in db_inv ] return inv_list @db_api.placement_context_manager.reader def _get_inventory_by_provider_id(ctx, rp_id): inv = sa.alias(_INV_TBL, name="i") sel = sa.select( inv.c.resource_class_id, inv.c.total, inv.c.reserved, inv.c.min_unit, inv.c.max_unit, inv.c.step_size, inv.c.allocation_ratio, inv.c.updated_at, inv.c.created_at, ) sel = sel.where(inv.c.resource_provider_id == rp_id) return [dict(r._mapping) for r in ctx.session.execute(sel)] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/objects/project.py0000664000175000017500000000605100000000000023260 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_db import exception as db_exc import sqlalchemy as sa from placement.db.sqlalchemy import models from placement import db_api from placement import exception PROJECT_TBL = models.Project.__table__ @db_api.placement_context_manager.writer def ensure_incomplete_project(ctx): """Ensures that a project record is created for the "incomplete consumer project".
Returns the internal ID of that record. """ incomplete_id = ctx.config.placement.incomplete_consumer_project_id sel = sa.select(PROJECT_TBL.c.id).where( PROJECT_TBL.c.external_id == incomplete_id) res = ctx.session.execute(sel).fetchone() if res: return res[0] ins = PROJECT_TBL.insert().values(external_id=incomplete_id) res = ctx.session.execute(ins) return res.inserted_primary_key[0] @db_api.placement_context_manager.reader def _get_project_by_external_id(ctx, external_id): projects = sa.alias(PROJECT_TBL, name="p") sel = sa.select( projects.c.id, projects.c.external_id, projects.c.updated_at, projects.c.created_at, ) sel = sel.where(projects.c.external_id == external_id) res = ctx.session.execute(sel).fetchone() if not res: raise exception.ProjectNotFound(external_id=external_id) return dict(res._mapping) class Project(object): def __init__(self, context, id=None, external_id=None, updated_at=None, created_at=None): self._context = context self.id = id self.external_id = external_id self.updated_at = updated_at self.created_at = created_at @staticmethod def _from_db_object(ctx, target, source): target._context = ctx target.id = source['id'] target.external_id = source['external_id'] target.updated_at = source['updated_at'] target.created_at = source['created_at'] return target @classmethod def get_by_external_id(cls, ctx, external_id): res = _get_project_by_external_id(ctx, external_id) return cls._from_db_object(ctx, cls(ctx), res) def create(self): @db_api.placement_context_manager.writer def _create_in_db(ctx): db_obj = models.Project(external_id=self.external_id) try: db_obj.save(ctx.session) except db_exc.DBDuplicateEntry: raise exception.ProjectExists(external_id=self.external_id) self._from_db_object(ctx, self, db_obj) _create_in_db(self._context) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/objects/research_context.py0000664000175000017500000016355100000000000025163 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Utility methods for getting allocation candidates.""" import collections import copy import os_traits from oslo_log import log as logging import random import sqlalchemy as sa from sqlalchemy import sql from placement.db.sqlalchemy import models from placement import db_api from placement import exception from placement.objects import rp_candidates from placement.objects import trait as trait_obj # TODO(tetsuro): Move these public symbols in a central place. 
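# Illustrative note (example values assumed): throughout this module the
# aggregate and trait request structures are nested "AND of ORs" -- the outer
# list is AND'd together and each inner collection is OR'd, e.g.
#
#     member_of = [['agg1'], ['agg2', 'agg3']]   # agg1 AND (agg2 OR agg3)
#     required_traits = [{11}, {22, 33}]         # 11 AND (22 OR 33), trait IDs
#
# See provider_ids_matching_aggregates() and
# provider_ids_matching_required_traits() below for the SQL each produces.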
_ALLOC_TBL = models.Allocation.__table__ _INV_TBL = models.Inventory.__table__ _RP_TBL = models.ResourceProvider.__table__ _AGG_TBL = models.PlacementAggregate.__table__ _RP_AGG_TBL = models.ResourceProviderAggregate.__table__ _RP_TRAIT_TBL = models.ResourceProviderTrait.__table__ LOG = logging.getLogger(__name__) AnchorIds = collections.namedtuple( 'AnchorIds', 'rp_id rp_uuid anchor_id anchor_uuid') class RequestGroupSearchContext(object): """An adapter object that represents the search for allocation candidates for a single request group. """ def __init__(self, context, group, has_trees, sharing, suffix=''): """Initializes the object retrieving and caching matching providers for each conditions like resource and aggregates from database. :raises placement.exception.ResourceProviderNotFound if there is no provider found which satisfies the request. """ # TODO(tetsuro): split this into smaller functions reordering self.context = context # The request group suffix self.suffix = suffix # A dict, keyed by resource class internal ID, of the amounts of that # resource class being requested by the group. self.resources = {} # A set of string names of all resource classes requested by the group. self.rcs = set() for rc, amount in group.resources.items(): self.resources[context.rc_cache.id_from_string(rc)] = amount self.rcs.add(rc) # A list of lists of aggregate UUIDs that the providers matching for # that request group must be members of self.member_of = group.member_of # A list of aggregate UUIDs that the providers matching for # that request group must not be members of self.forbidden_aggs = group.forbidden_aggs # A set of provider ids that matches the requested positive aggregates self.rps_in_aggs = set() if self.member_of: self.rps_in_aggs = provider_ids_matching_aggregates( context, self.member_of) if not self.rps_in_aggs: LOG.debug('found no providers matching aggregates %s', self.member_of) raise exception.ResourceProviderNotFound() # If True, this RequestGroup represents requests which must be # satisfied by a single resource provider. If False, represents a # request for resources in any resource provider in the same tree, # or a sharing provider. self.use_same_provider = group.use_same_provider # Both required_trait_names and required_traits expresses the same # request with the same nested list of sets structure but # required_trait_names contains trait names while required_traits # contains trait internal IDs self.required_trait_names = group.required_traits # let's map the trait names to internal IDs this is useful for DB calls # expecting trait IDs. The structure of this field is the same as the # required_trait_names field. self.required_traits = [] # forbidden_traits is a dict mapping trait names to trait internal IDs self.forbidden_traits = {} for any_traits in group.required_traits: self.required_traits.append( set(trait_obj.ids_from_names(context, any_traits).values())) if group.forbidden_traits: self.forbidden_traits = trait_obj.ids_from_names( context, group.forbidden_traits) # Internal id of a root provider. If provided, this RequestGroup must # be satisfied by resource provider(s) under the root provider. 
self.tree_root_id = None if group.in_tree: tree_ids = provider_ids_from_uuid(context, group.in_tree) if tree_ids is None: LOG.debug("No provider found for in_tree%s=%s", suffix, group.in_tree) raise exception.ResourceProviderNotFound() self.tree_root_id = tree_ids.root_id LOG.debug("Group %s getting allocation candidates in the same " "tree with the root provider %s", self.suffix, tree_ids.root_uuid) self._rps_with_resource = {} for rc_id, amount in self.resources.items(): # NOTE(tetsuro): We could pass rps in requested aggregates to # get_providers_with_resource here once we explicitly put # aggregates to nested (non-root) providers (the aggregate # flows down feature) rather than applying later the implicit rule # that aggregate on root spans the whole tree rc_name = context.rc_cache.string_from_id(rc_id) LOG.debug('getting providers with %d %s', amount, rc_name) provs_with_resource = get_providers_with_resource( context, rc_id, amount, tree_root_id=self.tree_root_id) if not provs_with_resource: LOG.debug('found no providers with %d %s', amount, rc_name) raise exception.ResourceProviderNotFound() self._rps_with_resource[rc_id] = provs_with_resource # a set of resource provider IDs that share some inventory for some # resource class. self._sharing_providers = sharing # bool indicating there is some level of nesting in the environment self.has_trees = has_trees @property def exists_sharing(self): """bool indicating there is sharing providers in the environment for the requested resource class (if there isn't, we take faster, simpler code paths) """ # NOTE: This could be refactored to see the requested resources return bool(self._sharing_providers) @property def exists_nested(self): """bool indicating there is some level of nesting in the environment (if there isn't, we take faster, simpler code paths) """ # NOTE: This could be refactored to see the requested resources return self.has_trees def get_rps_with_shared_capacity(self, rc_id): sharing_in_aggs = self._sharing_providers if self.rps_in_aggs: sharing_in_aggs &= self.rps_in_aggs if not sharing_in_aggs: return set() rps_with_resource = set(p[0] for p in self._rps_with_resource[rc_id]) return sharing_in_aggs & rps_with_resource def get_rps_with_resource(self, rc_id): return self._rps_with_resource.get(rc_id) class RequestWideSearchContext(object): """An adapter object that represents the search for allocation candidates for a request-wide parameters. """ def __init__(self, context, rqparams, nested_aware): """Create a RequestWideSearchContext. :param context: placement.context.RequestContext object :param rqparams: A RequestWideParams. :param nested_aware: Boolean, True if we are at a microversion that supports trees; False otherwise. """ self._ctx = context self._limit = rqparams.limit self.group_policy = rqparams.group_policy self._nested_aware = nested_aware self.has_trees = _has_provider_trees(context) # This is set up by _process_anchor_* below. It remains None if no # anchor filters were requested. Otherwise it becomes a set of internal # IDs of root providers that conform to the requested filters. self.anchor_root_ids = None self._process_anchor_traits(rqparams) self.same_subtrees = rqparams.same_subtrees # A dict, keyed by resource provider id of ProviderSummary objects. # Used as a cache of ProviderSummaries created in this request to # avoid duplication. 
self.summaries_by_id = {} # A set of resource classes that were requested in more than one group self.multi_group_rcs = set() # A mapping of resource provider uuid to parent provider uuid, used # when merging allocation candidates. self.parent_uuid_by_rp_uuid = {} # Dict mapping (resource provier uuid, resource class name) to a # ProviderSummaryResource. Used during _exceeds_capacity in # _merge_candidates. self.psum_res_by_rp_rc = {} def _process_anchor_traits(self, rqparams): """Set or filter self.anchor_root_ids according to anchor required/forbidden traits. :param rqparams: RequestWideParams. :raises TraitNotFound: If any named trait does not exist in the database. :raises ResourceProviderNotFound: If anchor trait filters were specified, but we find no matching providers. """ required, forbidden = ( rqparams.anchor_required_traits, rqparams.anchor_forbidden_traits) if not (required or forbidden): return required_ids = set(trait_obj.ids_from_names( self._ctx, required).values()) if required else None forbidden_ids = set(trait_obj.ids_from_names( self._ctx, forbidden).values()) if forbidden else None self.anchor_root_ids = _get_roots_with_traits( self._ctx, required_ids, forbidden_ids) if not self.anchor_root_ids: LOG.debug('found no providers satisfying required traits: %s and ' 'forbidden traits: %s', required, forbidden) raise exception.ResourceProviderNotFound() def in_filtered_anchors(self, anchor_root_id): """Returns whether anchor_root_id is present in filtered anchors. (If we don't have filtered anchors, that implicitly means "all possible anchors", so we return True.) """ if self.anchor_root_ids is None: # Not filtering anchors return True return anchor_root_id in self.anchor_root_ids def exclude_nested_providers( self, allocation_requests, provider_summaries): """Exclude allocation requests and provider summaries for old microversions if they involve more than one provider from the same tree. """ if self._nested_aware or not self.has_trees: return allocation_requests, provider_summaries filtered_areqs = [] all_rp_uuids = set() for a_req in allocation_requests: root_by_rp = { arr.resource_provider.uuid: arr.resource_provider.root_provider_uuid for arr in a_req.resource_requests} # If more than one allocation is provided by the same tree, # we need to skip that allocation request. if len(root_by_rp) == len(set(root_by_rp.values())): filtered_areqs.append(a_req) all_rp_uuids |= set(root_by_rp) # Exclude eliminated providers from the provider summaries. filtered_summaries = [ps for ps in provider_summaries if ps.resource_provider.uuid in all_rp_uuids] LOG.debug( 'Excluding nested providers yields %d allocation requests and ' '%d provider summaries', len(filtered_areqs), len(filtered_summaries)) return filtered_areqs, filtered_summaries def limit_results(self, alloc_request_objs, summary_objs): # Limit the number of allocation request objects. We do this after # creating all of them so that we can do a random slice without # needing to mess with complex sql or add additional columns to the DB. if self._limit and self._limit < len(alloc_request_objs): if self._ctx.config.placement.randomize_allocation_candidates: alloc_request_objs = random.sample( alloc_request_objs, self._limit) else: alloc_request_objs = alloc_request_objs[:self._limit] # Limit summaries to only those mentioned in the allocation reqs. kept_summary_objs = [] alloc_req_root_uuids = set() # Extract root resource provider uuids from the resource requests. 
for aro in alloc_request_objs: for arr in aro.resource_requests: alloc_req_root_uuids.add( arr.resource_provider.root_provider_uuid) for summary in summary_objs: rp_root_uuid = summary.resource_provider.root_provider_uuid # Skip a summary if we are limiting and haven't selected an # allocation request that uses the resource provider. if rp_root_uuid not in alloc_req_root_uuids: continue kept_summary_objs.append(summary) summary_objs = kept_summary_objs LOG.debug('Limiting results yields %d allocation requests and ' '%d provider summaries', len(alloc_request_objs), len(summary_objs)) elif self._ctx.config.placement.randomize_allocation_candidates: random.shuffle(alloc_request_objs) return alloc_request_objs, summary_objs def copy_arr_if_needed(self, arr): """Copy or return arr, depending on the search context. In cases with group_policy=none where multiple groups request amounts from the same resource class, we end up using the same AllocationRequestResource more than once when consolidating. So we need to make a copy so we don't overwrite the one used for a different result. But as an optimization, since this copy is not cheap, we don't do it unless it's necessary. :param arr: An AllocationRequestResource to be returned or copied and returned. :return: arr or a copy thereof. """ if self.group_policy != 'none': return arr if arr.resource_class in self.multi_group_rcs: return copy.copy(arr) return arr def exceeds_capacity(self, areq): """Checks a (consolidated) AllocationRequest against the provider summaries to ensure that it does not exceed capacity. Exceeding capacity can mean the total amount (already used plus this allocation) exceeds the total inventory amount; or this allocation exceeds the max_unit in the inventory record. :param areq: An AllocationRequest produced by the `_consolidate_allocation_requests` method. :return: True if areq exceeds capacity; False otherwise. """ for arr in areq.resource_requests: key = (arr.resource_provider.id, arr.resource_class) psum_res = self.psum_res_by_rp_rc[key] if psum_res.used + arr.amount > psum_res.capacity: LOG.debug('Excluding the following AllocationRequest because ' 'used (%d) + amount (%d) > capacity (%d) for ' 'resource class %s: %s', psum_res.used, arr.amount, psum_res.capacity, arr.resource_class, str(areq)) return True if arr.amount > psum_res.max_unit: LOG.debug('Excluding the following AllocationRequest because ' 'amount (%d) > max_unit (%d) for resource class ' '%s: %s', arr.amount, psum_res.max_unit, arr.resource_class, str(areq)) return True return False @property def config(self): return self._ctx.config @db_api.placement_context_manager.reader def provider_ids_from_uuid(context, uuid): """Given the UUID of a resource provider, returns a sqlalchemy object with the internal ID, the UUID, the parent provider's internal ID, parent provider's UUID, the root provider's internal ID and the root provider UUID. 
:returns: sqlalchemy object containing the internal IDs and UUIDs of the provider identified by the supplied UUID :param uuid: The UUID of the provider to look up """ # SELECT # rp.id, rp.uuid, # parent.id AS parent_id, parent.uuid AS parent_uuid, # root.id AS root_id, root.uuid AS root_uuid # FROM resource_providers AS rp # INNER JOIN resource_providers AS root # ON rp.root_provider_id = root.id # LEFT JOIN resource_providers AS parent # ON rp.parent_provider_id = parent.id me = sa.alias(_RP_TBL, name="me") parent = sa.alias(_RP_TBL, name="parent") root = sa.alias(_RP_TBL, name="root") cols = [ me.c.id, me.c.uuid, parent.c.id.label('parent_id'), parent.c.uuid.label('parent_uuid'), root.c.id.label('root_id'), root.c.uuid.label('root_uuid'), ] me_to_root = sa.join(me, root, me.c.root_provider_id == root.c.id) me_to_parent = sa.outerjoin( me_to_root, parent, me.c.parent_provider_id == parent.c.id) sel = sa.select(*cols).select_from(me_to_parent) sel = sel.where(me.c.uuid == uuid) res = context.session.execute(sel).fetchone() if not res: return None return res def _usage_select(rc_ids): usage = sa.select( _ALLOC_TBL.c.resource_provider_id, _ALLOC_TBL.c.resource_class_id, sql.func.sum(_ALLOC_TBL.c.used).label('used') ).where( _ALLOC_TBL.c.resource_class_id.in_(rc_ids) ).group_by( _ALLOC_TBL.c.resource_provider_id, _ALLOC_TBL.c.resource_class_id, ) return usage.subquery(name='usage') def _capacity_check_clause(amount, usage, inv_tbl=_INV_TBL): return sa.and_( sql.func.coalesce(usage.c.used, 0) + amount <= ( (inv_tbl.c.total - inv_tbl.c.reserved) * inv_tbl.c.allocation_ratio), inv_tbl.c.min_unit <= amount, inv_tbl.c.max_unit >= amount, amount % inv_tbl.c.step_size == 0, ) @db_api.placement_context_manager.reader def get_providers_with_resource(ctx, rc_id, amount, tree_root_id=None): """Returns a set of tuples of (provider ID, root provider ID) of providers that satisfy the request for a single resource class. :param ctx: Session context to use :param rc_id: Internal ID of resource class to check inventory for :param amount: Amount of resource being requested :param tree_root_id: An optional root provider ID. If provided, the results are limited to the resource providers under the given root resource provider. 
""" # SELECT rp.id, rp.root_provider_id # FROM resource_providers AS rp # JOIN inventories AS inv # ON rp.id = inv.resource_provider_id # AND inv.resource_class_id = $RC_ID # LEFT JOIN ( # SELECT # allocs.resource_provider_id, # SUM(allocs.used) AS used # FROM allocations AS allocs # WHERE allocs.resource_class_id = $RC_ID # GROUP BY allocs.resource_provider_id # ) AS usaged # ON inv.resource_provider_id = usaged.resource_provider_id # WHERE # used + $AMOUNT <= ((total - reserved) * inv.allocation_ratio) # AND inv.min_unit <= $AMOUNT # AND inv.max_unit >= $AMOUNT # AND $AMOUNT % inv.step_size = 0 # # If tree_root_id specified: # AND rp.root_provider_id == $tree_root_id rpt = sa.alias(_RP_TBL, name="rp") inv = sa.alias(_INV_TBL, name="inv") usage = _usage_select([rc_id]) rp_to_inv = sa.join( rpt, inv, sa.and_( rpt.c.id == inv.c.resource_provider_id, inv.c.resource_class_id == rc_id)) inv_to_usage = sa.outerjoin( rp_to_inv, usage, inv.c.resource_provider_id == usage.c.resource_provider_id) sel = sa.select(rpt.c.id, rpt.c.root_provider_id) sel = sel.select_from(inv_to_usage) where_conds = _capacity_check_clause(amount, usage, inv_tbl=inv) if tree_root_id is not None: where_conds = sa.and_( rpt.c.root_provider_id == tree_root_id, where_conds) sel = sel.where(where_conds) res = ctx.session.execute(sel).fetchall() res = set((r[0], r[1]) for r in res) return res @db_api.placement_context_manager.reader def get_providers_with_root(ctx, allowed, forbidden): """Returns a set of tuples of (provider ID, root provider ID) of given resource providers :param ctx: Session context to use :param allowed: resource provider ids to include :param forbidden: resource provider ids to exclude """ # SELECT rp.id, rp.root_provider_id # FROM resource_providers AS rp # WHERE rp.id IN ($allowed) # AND rp.id NOT IN ($forbidden) sel = sa.select(_RP_TBL.c.id, _RP_TBL.c.root_provider_id) sel = sel.select_from(_RP_TBL) cond = [] if allowed: cond.append(_RP_TBL.c.id.in_(allowed)) if forbidden: cond.append(~_RP_TBL.c.id.in_(forbidden)) if cond: sel = sel.where(sa.and_(*cond)) res = ctx.session.execute(sel).fetchall() res = set((r[0], r[1]) for r in res) return res @db_api.placement_context_manager.reader def get_provider_ids_matching(rg_ctx): """Returns a list of tuples of (internal provider ID, root provider ID) that have available inventory to satisfy all the supplied requests for resources. If no providers match, the empty list is returned. :note: This function is used to get results for (a) a RequestGroup with use_same_provider=True in a granular request, or (b) a short cut path for scenarios that do NOT involve sharing or nested providers. Each `internal provider ID` represents a *single* provider that can satisfy *all* of the resource/trait/aggregate criteria. This is in contrast with get_trees_matching_all(), where each provider might only satisfy *some* of the resources, the rest of which are satisfied by other providers in the same tree or shared via aggregate. :param rg_ctx: RequestGroupSearchContext """ filtered_rps, forbidden_rp_ids = get_provider_ids_for_traits_and_aggs( rg_ctx) if filtered_rps is None: # If no providers match the traits/aggs, we can short out return [] # Instead of constructing a giant complex SQL statement that joins multiple # copies of derived usage tables and inventory tables to each other, we do # one query for each requested resource class. 
This allows us to log a # rough idea of which resource class query returned no results (for # purposes of rough debugging of a single allocation candidates request) as # well as reduce the necessary knowledge of SQL in order to understand the # queries being executed here. provs_with_resource = set() first = True for rc_id, amount in rg_ctx.resources.items(): rc_name = rg_ctx.context.rc_cache.string_from_id(rc_id) provs_with_resource = rg_ctx.get_rps_with_resource(rc_id) LOG.debug("found %d providers with available %d %s", len(provs_with_resource), amount, rc_name) if not provs_with_resource: return [] rc_rp_ids = set(p[0] for p in provs_with_resource) # The branching below could be collapsed code-wise, but is in place to # make the debug logging clearer. if first: first = False if filtered_rps: filtered_rps &= rc_rp_ids LOG.debug("found %d providers after applying initial " "aggregate and trait filters", len(filtered_rps)) else: filtered_rps = rc_rp_ids # The following condition is not necessary for the logic; just # prevents the message from being logged unnecessarily. if forbidden_rp_ids: # Forbidden trait/aggregate filters only need to be applied # a) on the first iteration; and # b) if not already set up before the loop # ...since any providers in the resulting set are the basis # for intersections, and providers with forbidden traits # are already absent from that set after we've filtered # them once. filtered_rps -= forbidden_rp_ids LOG.debug("found %d providers after applying forbidden " "traits/aggregates", len(filtered_rps)) else: filtered_rps &= rc_rp_ids LOG.debug("found %d providers after filtering by previous result", len(filtered_rps)) if not filtered_rps: return [] if not rg_ctx.resources: # NOTE(tetsuro): This does an extra sql query that could be avoided if # all the smaller queries in get_provider_ids_for_traits_and_aggs() # would return the internal ID and the root ID as well for each RP. provs_with_resource = get_providers_with_root( rg_ctx.context, filtered_rps, forbidden_rp_ids) # provs_with_resource will contain a superset of providers with IDs still # in our filtered_rps set. We return the list of tuples of # (internal provider ID, root internal provider ID) return [rpids for rpids in provs_with_resource if rpids[0] in filtered_rps] @db_api.placement_context_manager.reader def get_trees_matching_all(rg_ctx, rw_ctx): """Returns a RPCandidates object representing the providers that satisfy the request for resources. If traits are also required, this function only returns results where the set of providers within a tree that satisfy the resource request collectively have all the required traits associated with them. This means that given the following provider tree: cn1 | --> pf1 (SRIOV_NET_VF:2) | --> pf2 (SRIOV_NET_VF:1, HW_NIC_OFFLOAD_GENEVE) If a user requests 1 SRIOV_NET_VF resource and no required traits will return both pf1 and pf2. However, a request for 2 SRIOV_NET_VF and required trait of HW_NIC_OFFLOAD_GENEVE will return no results (since pf1 is the only provider with enough inventory of SRIOV_NET_VF but it does not have the required HW_NIC_OFFLOAD_GENEVE trait). :note: This function is used for scenarios to get results for a RequestGroup with use_same_provider=False. In this scenario, we are able to use multiple providers within the same provider tree including sharing providers to satisfy different resources involved in a single RequestGroup. 
:param rg_ctx: RequestGroupSearchContext :param rw_ctx: RequestWideSearchContext """ if rg_ctx.forbidden_aggs: rps_bad_aggs = provider_ids_matching_aggregates( rg_ctx.context, [rg_ctx.forbidden_aggs]) # To get all trees that collectively have all required resource, # aggregates and traits, we use `RPCandidateList` which has a list of # three-tuples with the first element being resource provider ID, the # second element being the root provider ID and the third being resource # class ID. provs_with_inv = rp_candidates.RPCandidateList() for rc_id, amount in rg_ctx.resources.items(): rc_name = rg_ctx.context.rc_cache.string_from_id(rc_id) provs_with_inv_rc = rp_candidates.RPCandidateList() rc_provs_with_inv = rg_ctx.get_rps_with_resource(rc_id) provs_with_inv_rc.add_rps(rc_provs_with_inv, rc_id) LOG.debug("found %d providers under %d trees with available %d %s", len(provs_with_inv_rc), len(provs_with_inv_rc.trees), amount, rc_name) if not provs_with_inv_rc: # If there's no providers that have one of the resource classes, # then we can short-circuit returning an empty RPCandidateList return rp_candidates.RPCandidateList() sharing_providers = rg_ctx.get_rps_with_shared_capacity(rc_id) if sharing_providers and rg_ctx.tree_root_id is None: # There are sharing providers for this resource class, so we # should also get combinations of (sharing provider, anchor root) # in addition to (non-sharing provider, anchor root) we've just # got via get_providers_with_resource() above. We must skip this # process if tree_root_id is provided via the ?in_tree= # queryparam, because it restricts resources from another tree. anchors = anchors_for_sharing_providers( rg_ctx.context, sharing_providers) rc_provs_with_inv = set( (anchor.rp_id, anchor.anchor_id) for anchor in anchors) provs_with_inv_rc.add_rps(rc_provs_with_inv, rc_id) LOG.debug( "considering %d sharing providers with %d %s, " "now we've got %d provider trees", len(sharing_providers), amount, rc_name, len(provs_with_inv_rc.trees)) # If we have a list of viable anchor roots, filter to those if rw_ctx.anchor_root_ids: provs_with_inv_rc.filter_by_tree(rw_ctx.anchor_root_ids) LOG.debug( "found %d providers under %d trees after applying anchor root " "filter", len(provs_with_inv_rc.rps), len(provs_with_inv_rc.trees)) # If that left nothing, we're done if not provs_with_inv_rc: return rp_candidates.RPCandidateList() if rg_ctx.member_of: # Aggregate on root spans the whole tree, so the rp itself # *or its root* should be in the aggregate provs_with_inv_rc.filter_by_rp_or_tree(rg_ctx.rps_in_aggs) LOG.debug("found %d providers under %d trees after applying " "aggregate filter %s", len(provs_with_inv_rc.rps), len(provs_with_inv_rc.trees), rg_ctx.member_of) if not provs_with_inv_rc: # Short-circuit returning an empty RPCandidateList return rp_candidates.RPCandidateList() if rg_ctx.forbidden_aggs: # Aggregate on root spans the whole tree, so the rp itself # *and its root* should be outside the aggregate provs_with_inv_rc.filter_by_rp_nor_tree(rps_bad_aggs) LOG.debug("found %d providers under %d trees after applying " "negative aggregate filter %s", len(provs_with_inv_rc.rps), len(provs_with_inv_rc.trees), rg_ctx.forbidden_aggs) if not provs_with_inv_rc: # Short-circuit returning an empty RPCandidateList return rp_candidates.RPCandidateList() # Adding the resource providers we've got for this resource class, # filter provs_with_inv to have only trees with enough inventories # for this resource class. 
Here "tree" includes sharing providers # in its terminology provs_with_inv.merge_common_trees(provs_with_inv_rc) LOG.debug( "found %d providers under %d trees after filtering by " "previous result", len(provs_with_inv.rps), len(provs_with_inv.trees)) if not provs_with_inv: return rp_candidates.RPCandidateList() if (not rg_ctx.required_traits and not rg_ctx.forbidden_traits) or ( rg_ctx.exists_sharing): # If there were no traits required, there's no difference in how we # calculate allocation requests between nested and non-nested # environments, so just short-circuit and return. Or if sharing # providers are in play, we check the trait constraints later # in _alloc_candidates_multiple_providers(), so skip. return provs_with_inv # Return the providers where the providers have the available inventory # capacity and that set of providers (grouped by their tree) have all # of the required traits and none of the forbidden traits rp_tuples_with_trait = _get_trees_with_traits( rg_ctx.context, provs_with_inv.rps, rg_ctx.required_traits, rg_ctx.forbidden_traits) provs_with_inv.filter_by_rp(rp_tuples_with_trait) LOG.debug("found %d providers under %d trees after applying " "traits filter - required: %s, forbidden: %s", len(provs_with_inv.rps), len(provs_with_inv.trees), list(rg_ctx.required_trait_names), list(rg_ctx.forbidden_traits)) return provs_with_inv @db_api.placement_context_manager.reader def _get_trees_with_traits(ctx, rp_ids, required_traits, forbidden_traits): """Given a list of provider IDs, filter them to return a set of tuples of (provider ID, root provider ID) of providers which belong to a tree that can satisfy trait requirements. This returns trees that still have the possibility to be a match according to the required and forbidden traits. It returns every rp from the tree that is in rp_ids, even if some of those rps are providing forbidden traits. This filters out a whole tree if either: * every RPs of the tree from rp_ids having a forbidden trait (see test_get_trees_with_traits_forbidden_1 and _2) * there is a required trait that none of the RPs of the tree from rp_ids provide (see test_get_trees_with_traits) or there is an RP providing the required trait but that also provides a forbidden trait (see test_get_trees_with_traits_forbidden_3) The returned tree still might not be a valid tree as this function returns a tree even if some providers need to be ignored due to forbidden traits. So if those RPs are needed from resource perspective then the tree will be filtered out later by objects.allocation_candidate._check_traits_for_alloc_request :param ctx: Session context to use :param rp_ids: a set of resource provider IDs :param required_traits: A list of set of trait internal IDs where the traits in each nested set are OR'd while the items in the outer list are AND'd together. The RPs in the tree should COLLECTIVELY fulfill this trait request. :param forbidden_traits: A list of trait internal IDs that a resource provider tree must not have. """ # TODO(gibi): if somebody can formulate the below three SQL query to a # single one then probably that will improve performance # Get the root of all rps in the rp_ids as we need to return every rp from # rp_ids that is in a matching tree but below we will filter out rps by # traits. 
So we need a copy and also that copy needs to associate rps to # trees by root_id rpt = sa.alias(_RP_TBL, name='rpt') sel = sa.select(rpt.c.id, rpt.c.root_provider_id).select_from(rpt) sel = sel.where(rpt.c.id.in_(rp_ids)) res = ctx.session.execute(sel).fetchall() original_rp_ids = {rp_id: root_id for rp_id, root_id in res} # First filter out the rps from the rp_ids list that provide forbidden # traits. To do that we collect those rps that provide any of the forbidden # traits and with the outer join and the null check we filter them out # of the result rptt_forbidden = sa.alias(_RP_TRAIT_TBL, name="rptt_forbidden") rp_to_trait = sa.outerjoin( rpt, rptt_forbidden, sa.and_( rpt.c.id == rptt_forbidden.c.resource_provider_id, rptt_forbidden.c.trait_id.in_(forbidden_traits) ) ) sel = sa.select(rpt.c.id, rpt.c.root_provider_id).select_from(rp_to_trait) sel = sel.where( sa.and_( rpt.c.id.in_(original_rp_ids.keys()), rptt_forbidden.c.trait_id == sa.null() ) ) res = ctx.session.execute(sel).fetchall() # These are the rps that does not provide any forbidden traits good_rp_ids = {} for rp_id, root_id in res: good_rp_ids[rp_id] = root_id # shortcut if no traits required the good_rp_ids.values() contains all the # good roots if not required_traits: return { (rp_id, root_id) for rp_id, root_id in original_rp_ids.items() if root_id in good_rp_ids.values() } # now get the traits provided by the good rps per tree rptt = sa.alias(_RP_TRAIT_TBL, name="rptt") rp_to_trait = sa.join( rpt, rptt, rpt.c.id == rptt.c.resource_provider_id) sel = sa.select( rpt.c.root_provider_id, rptt.c.trait_id ).select_from(rp_to_trait) sel = sel.where(rpt.c.id.in_(good_rp_ids)) res = ctx.session.execute(sel).fetchall() root_to_traits = collections.defaultdict(set) for root_id, trait_id in res: root_to_traits[root_id].add(trait_id) result = set() # filter the trees by checking if each tree provides all the # required_traits for root_id, provided_traits in root_to_traits.items(): # we need a match for all the items from the outer list of the # required_traits as that describes AND relationship if all( # we need at least one match per nested trait set as that set # describes OR relationship any_traits.intersection(provided_traits) for any_traits in required_traits ): # This tree is matching the required traits so add result all the # rps from the original rp_ids that belongs to this tree result.update( { (rp_id, root_id) for rp_id, original_root_id in original_rp_ids.items() if root_id == original_root_id } ) return result @db_api.placement_context_manager.reader def _get_roots_with_traits(ctx, required_traits, forbidden_traits): """Return a set of IDs of root providers (NOT trees) that can satisfy trait requirements. At least one of ``required_traits`` or ``forbidden_traits`` is required. :param ctx: Session context to use :param required_traits: A set of required trait internal IDs that each root provider must have associated with it. :param forbidden_traits: A set of trait internal IDs that each root provider must not have. :returns: A set of internal IDs of root providers that satisfy the specified trait requirements. The empty set if no roots match. :raises ValueError: If required_traits and forbidden_traits are both empty/ None. 
""" if not (required_traits or forbidden_traits): raise ValueError("At least one of required_traits or forbidden_traits " "is required.") # The SQL we want looks like this: # # SELECT rp.id FROM resource_providers AS rp rpt = sa.alias(_RP_TBL, name="rp") sel = sa.select(rpt.c.id) # WHERE rp.parent_provider_id IS NULL cond = [rpt.c.parent_provider_id.is_(None)] subq_join = None # TODO(efried): DRY traits subquery with _get_trees_with_traits # # Only if we have required traits... if required_traits: # INNER JOIN resource_provider_traits AS rptt # ON rp.id = rptt.resource_provider_id # AND rptt.trait_id IN ($REQUIRED_TRAIT_IDS) rptt = sa.alias(_RP_TRAIT_TBL, name="rptt") rpt_to_rptt = sa.join( rpt, rptt, sa.and_( rpt.c.id == rptt.c.resource_provider_id, rptt.c.trait_id.in_(required_traits))) subq_join = rpt_to_rptt # Only get the resource providers that have ALL the required traits, # so we need to GROUP BY the provider id and ensure that the # COUNT(trait_id) is equal to the number of traits we are requiring num_traits = len(required_traits) having_cond = sa.func.count(sa.distinct(rptt.c.trait_id)) == num_traits sel = sel.having(having_cond) # # Only if we have forbidden_traits... if forbidden_traits: # LEFT JOIN resource_provider_traits AS rptt_forbid rptt_forbid = sa.alias(_RP_TRAIT_TBL, name="rptt_forbid") join_to = rpt if subq_join is not None: join_to = subq_join rpt_to_rptt_forbid = sa.outerjoin( # ON rp.id = rptt_forbid.resource_provider_id # AND rptt_forbid.trait_id IN ($FORBIDDEN_TRAIT_IDS) join_to, rptt_forbid, sa.and_( rpt.c.id == rptt_forbid.c.resource_provider_id, rptt_forbid.c.trait_id.in_(forbidden_traits))) # AND rptt_forbid.resource_provider_id IS NULL cond.append(rptt_forbid.c.resource_provider_id.is_(None)) subq_join = rpt_to_rptt_forbid sel = sel.select_from(subq_join).where(sa.and_(*cond)).group_by(rpt.c.id) return set(row[0] for row in ctx.session.execute(sel).fetchall()) @db_api.placement_context_manager.reader def provider_ids_matching_aggregates(context, member_of, rp_ids=None): """Given a list of lists of aggregate UUIDs, return the internal IDs of all resource providers associated with the aggregates. :param member_of: A list containing lists of aggregate UUIDs. Each item in the outer list is to be AND'd together. If that item contains multiple values, they are OR'd together. For example, if member_of is:: [ ['agg1'], ['agg2', 'agg3'], ] we will return all the resource providers that are associated with agg1 as well as either (agg2 or agg3) :param rp_ids: When present, returned resource providers are limited to only those in this value :returns: A set of internal resource provider IDs having all required aggregate associations """ # Given a request for the following: # # member_of = [ # [agg1], # [agg2], # [agg3, agg4] # ] # # we need to produce the following SQL expression: # # SELECT # rp.id # FROM resource_providers AS rp # JOIN resource_provider_aggregates AS rpa1 # ON rp.id = rpa1.resource_provider_id # AND rpa1.aggregate_id IN ($AGG1_ID) # JOIN resource_provider_aggregates AS rpa2 # ON rp.id = rpa2.resource_provider_id # AND rpa2.aggregate_id IN ($AGG2_ID) # JOIN resource_provider_aggregates AS rpa3 # ON rp.id = rpa3.resource_provider_id # AND rpa3.aggregate_id IN ($AGG3_ID, $AGG4_ID) # # Only if we have rp_ids... 
# WHERE rp.id IN ($RP_IDs) # First things first, get a map of all the aggregate UUID to internal # aggregate IDs agg_uuids = set() for members in member_of: for member in members: agg_uuids.add(member) agg_tbl = sa.alias(_AGG_TBL, name='aggs') agg_sel = sa.select(agg_tbl.c.uuid, agg_tbl.c.id) agg_sel = agg_sel.where(agg_tbl.c.uuid.in_(agg_uuids)) agg_uuid_map = { r[0]: r[1] for r in context.session.execute(agg_sel).fetchall() } rp_tbl = sa.alias(_RP_TBL, name='rp') join_chain = rp_tbl for x, members in enumerate(member_of): rpa_tbl = sa.alias(_RP_AGG_TBL, name='rpa%d' % x) agg_ids = [agg_uuid_map[member] for member in members if member in agg_uuid_map] if not agg_ids: # This member_of list contains only non-existent aggregate UUIDs # and therefore we will always return 0 results, so short-circuit return set() join_cond = sa.and_( rp_tbl.c.id == rpa_tbl.c.resource_provider_id, rpa_tbl.c.aggregate_id.in_(agg_ids)) join_chain = sa.join(join_chain, rpa_tbl, join_cond) sel = sa.select(rp_tbl.c.id).select_from(join_chain) if rp_ids: sel = sel.where(rp_tbl.c.id.in_(rp_ids)) return set(r[0] for r in context.session.execute(sel)) @db_api.placement_context_manager.reader def provider_ids_matching_required_traits( context, required_traits, rp_ids=None ): """Given a list of set of trait internal IDs, return the internal IDs of all resource providers that individually satisfy the requested traits. :param context: The request context :param required_traits: A non-empty list containing sets of trait IDs. Each item in the outer list is to be AND'd together. If that item contains multiple values, they are OR'd together. For example, if required is:: [ {'trait1ID'}, {'trait2ID', 'trait3ID'}, ] we will return all the resource providers that has trait1 and either trait2 or trait3. :param rp_ids: When present, returned resource providers are limited to only those in this value :returns: A set of internal resource provider IDs having all required traits """ if not required_traits: raise ValueError('required_traits must not be empty') # Given a request for the following: # # required = [ # {trait1}, # {trait2}, # {trait3, trait4} # ] # # we need to produce the following SQL expression: # # SELECT # rp.id # FROM resource_providers AS rp # JOIN resource_provider_traits AS rpt1 # ON rp.id = rpt1.resource_provider_id # AND rpt1.trait_id IN ($TRAIT1_ID) # JOIN resource_provider_traits AS rpt2 # ON rp.id = rpt2.resource_provider_id # AND rpt2.trait_id IN ($TRAIT2_ID) # JOIN resource_provider_traits AS rpt3 # ON rp.id = rpt3.resource_provider_id # AND rpt3.trait_id IN ($TRAIT3_ID, $TRAIT4_ID) # # Only if we have rp_ids... # WHERE rp.id IN ($RP_IDs) rp_tbl = sa.alias(_RP_TBL, name='rp') join_chain = rp_tbl for x, any_traits in enumerate(required_traits): rpt_tbl = sa.alias(_RP_TRAIT_TBL, name='rpt%d' % x) join_cond = sa.and_( rp_tbl.c.id == rpt_tbl.c.resource_provider_id, rpt_tbl.c.trait_id.in_(any_traits)) join_chain = sa.join(join_chain, rpt_tbl, join_cond) sel = sa.select(rp_tbl.c.id).select_from(join_chain) if rp_ids: sel = sel.where(rp_tbl.c.id.in_(rp_ids)) return set(r[0] for r in context.session.execute(sel)) @db_api.placement_context_manager.reader def get_provider_ids_having_any_trait(ctx, traits): """Returns a set of resource provider internal IDs that individually have ANY of the supplied traits. :param ctx: Session context to use :param traits: An iterable of trait internal IDs, at least one of which each provider must have associated with it. :raise ValueError: If traits is empty or None. 
""" if not traits: raise ValueError('traits must not be empty') rptt = sa.alias(_RP_TRAIT_TBL, name="rpt") sel = sa.select(rptt.c.resource_provider_id) sel = sel.where(rptt.c.trait_id.in_(traits)) sel = sel.group_by(rptt.c.resource_provider_id) return set(r[0] for r in ctx.session.execute(sel)) def get_provider_ids_for_traits_and_aggs(rg_ctx): """Get internal IDs for all providers matching the specified traits/aggs. :return: A tuple of: filtered_rp_ids: A set of internal provider IDs matching the specified criteria. If None, work was done and resulted in no matching providers. This is in contrast to the empty set, which indicates that no filtering was performed. forbidden_rp_ids: A set of internal IDs of providers having any of the specified forbidden_traits. """ filtered_rps = set() if rg_ctx.required_traits: trait_rps = provider_ids_matching_required_traits( rg_ctx.context, rg_ctx.required_traits) filtered_rps = trait_rps LOG.debug("found %d providers after applying required traits filter " "(%s)", len(filtered_rps), list(rg_ctx.required_trait_names)) if not filtered_rps: return None, [] # If 'member_of' has values, do a separate lookup to identify the # resource providers that meet the member_of constraints. if rg_ctx.member_of: if filtered_rps: filtered_rps &= rg_ctx.rps_in_aggs else: filtered_rps = rg_ctx.rps_in_aggs LOG.debug("found %d providers after applying required aggregates " "filter (%s)", len(filtered_rps), rg_ctx.member_of) if not filtered_rps: return None, [] forbidden_rp_ids = set() if rg_ctx.forbidden_aggs: rps_bad_aggs = provider_ids_matching_aggregates( rg_ctx.context, [rg_ctx.forbidden_aggs]) forbidden_rp_ids |= rps_bad_aggs if filtered_rps: filtered_rps -= rps_bad_aggs LOG.debug("found %d providers after applying forbidden aggregates " "filter (%s)", len(filtered_rps), rg_ctx.forbidden_aggs) if not filtered_rps: return None, [] if rg_ctx.forbidden_traits: rps_bad_traits = get_provider_ids_having_any_trait( rg_ctx.context, rg_ctx.forbidden_traits.values()) forbidden_rp_ids |= rps_bad_traits if filtered_rps: filtered_rps -= rps_bad_traits LOG.debug("found %d providers after applying forbidden traits " "filter (%s)", len(filtered_rps), list(rg_ctx.forbidden_traits)) if not filtered_rps: return None, [] return filtered_rps, forbidden_rp_ids @db_api.placement_context_manager.reader def get_sharing_providers(ctx, rp_ids=None): """Returns a set of resource provider IDs (internal IDs, not UUIDs) that indicate that they share resource via an aggregate association. Shared resource providers are marked with a standard trait called MISC_SHARES_VIA_AGGREGATE. This indicates that the provider allows its inventory to be consumed by other resource providers associated via an aggregate link. For example, assume we have two compute nodes, CN_1 and CN_2, each with inventory of VCPU and MEMORY_MB but not DISK_GB (in other words, these are compute nodes with no local disk). There is a resource provider called "NFS_SHARE" that has an inventory of DISK_GB and has the MISC_SHARES_VIA_AGGREGATE trait. Both the "CN_1" and "CN_2" compute node resource providers and the "NFS_SHARE" resource provider are associated with an aggregate called "AGG_1". The scheduler needs to determine the resource providers that can fulfill a request for 2 VCPU, 1024 MEMORY_MB and 100 DISK_GB. Clearly, no single provider can satisfy the request for all three resources, since neither compute node has DISK_GB inventory and the NFS_SHARE provider has no VCPU or MEMORY_MB inventories. 
However, if we consider the NFS_SHARE resource provider as providing inventory of DISK_GB for both CN_1 and CN_2, we can include CN_1 and CN_2 as potential fits for the requested set of resources. To facilitate that matching query, this function returns all providers that indicate they share their inventory with providers in some aggregate. :param rp_ids: When present, returned resource providers are limited to only those in this value """ # The SQL we need to generate here looks like this: # # SELECT rp.id # FROM resource_providers AS rp # INNER JOIN resource_provider_traits AS rpt # ON rp.id = rpt.resource_provider_id # AND rpt.trait_id = ${"MISC_SHARES_VIA_AGGREGATE" trait id} # WHERE rp.id IN $(RP_IDs) sharing_trait = trait_obj.Trait.get_by_name( ctx, os_traits.MISC_SHARES_VIA_AGGREGATE) rp_tbl = sa.alias(_RP_TBL, name='rp') rpt_tbl = sa.alias(_RP_TRAIT_TBL, name='rpt') rp_to_rpt_join = sa.join( rp_tbl, rpt_tbl, sa.and_(rp_tbl.c.id == rpt_tbl.c.resource_provider_id, rpt_tbl.c.trait_id == sharing_trait.id) ) sel = sa.select(rp_tbl.c.id).select_from(rp_to_rpt_join) if rp_ids: sel = sel.where(rp_tbl.c.id.in_(rp_ids)) return set(r[0] for r in ctx.session.execute(sel)) @db_api.placement_context_manager.reader def anchors_for_sharing_providers(context, rp_ids): """Given a list of internal IDs of sharing providers, returns a set of AnchorIds namedtuples, where each anchor is the unique root provider of a tree associated with the same aggregate as the sharing provider. (These are the providers that can "anchor" a single AllocationRequest.) The sharing provider may or may not itself be part of a tree; in either case, an entry for this root provider is included in the result. If the sharing provider is not part of any aggregate, the empty list is returned. """ # SELECT sps.id, sps.uuid, rps.id, rps.uuid) # FROM resource_providers AS sps # INNER JOIN resource_provider_aggregates AS shr_aggs # ON sps.id = shr_aggs.resource_provider_id # INNER JOIN resource_provider_aggregates AS shr_with_sps_aggs # ON shr_aggs.aggregate_id = shr_with_sps_aggs.aggregate_id # INNER JOIN resource_providers AS shr_with_sps # ON shr_with_sps_aggs.resource_provider_id = shr_with_sps.id # INNER JOIN resource_providers AS rps # ON shr_with_sps.root_provider_id = rps.id # WHERE sps.id IN $(RP_IDs) rps = sa.alias(_RP_TBL, name='rps') sps = sa.alias(_RP_TBL, name='sps') shr_aggs = sa.alias(_RP_AGG_TBL, name='shr_aggs') shr_with_sps_aggs = sa.alias(_RP_AGG_TBL, name='shr_with_sps_aggs') shr_with_sps = sa.alias(_RP_TBL, name='shr_with_sps') join_chain = sa.join( sps, shr_aggs, sps.c.id == shr_aggs.c.resource_provider_id) join_chain = sa.join( join_chain, shr_with_sps_aggs, shr_aggs.c.aggregate_id == shr_with_sps_aggs.c.aggregate_id) join_chain = sa.join( join_chain, shr_with_sps, shr_with_sps_aggs.c.resource_provider_id == shr_with_sps.c.id) join_chain = sa.join( join_chain, rps, shr_with_sps.c.root_provider_id == rps.c.id) sel = sa.select(sps.c.id, sps.c.uuid, rps.c.id, rps.c.uuid) sel = sel.select_from(join_chain) sel = sel.where(sps.c.id.in_(rp_ids)) return set([ AnchorIds(*res) for res in context.session.execute(sel).fetchall()]) @db_api.placement_context_manager.reader def _has_provider_trees(ctx): """Simple method that returns whether provider trees (i.e. nested resource providers) are in use in the deployment at all. This information is used to switch code paths when attempting to retrieve allocation candidate information. The code paths are eminently easier to execute and follow for non-nested scenarios... 
NOTE(jaypipes): The result of this function can be cached extensively. """ sel = sa.select(_RP_TBL.c.id) sel = sel.where(_RP_TBL.c.parent_provider_id.isnot(None)) sel = sel.limit(1) res = ctx.session.execute(sel).fetchall() return len(res) > 0 def get_usages_by_provider_trees(ctx, root_ids): """Returns a row iterator of usage records grouped by provider ID for all resource providers in all trees indicated in the ``root_ids``. """ # We build up a SQL expression that looks like this: # SELECT # rp.id as resource_provider_id # , rp.uuid as resource_provider_uuid # , inv.resource_class_id # , inv.total # , inv.reserved # , inv.allocation_ratio # , inv.max_unit # , usage.used # FROM resource_providers AS rp # LEFT JOIN inventories AS inv # ON rp.id = inv.resource_provider_id # LEFT JOIN ( # SELECT resource_provider_id, resource_class_id, SUM(used) as used # FROM allocations # JOIN resource_providers # ON allocations.resource_provider_id = resource_providers.id # AND resource_providers.root_provider_id IN($root_ids) # GROUP BY resource_provider_id, resource_class_id # ) # AS usage # ON inv.resource_provider_id = usage.resource_provider_id # AND inv.resource_class_id = usage.resource_class_id # WHERE rp.root_provider_id IN ($root_ids) rpt = sa.alias(_RP_TBL, name="rp") inv = sa.alias(_INV_TBL, name="inv") # Build our derived table (subquery in the FROM clause) that sums used # amounts for resource provider and resource class derived_alloc_to_rp = sa.join( _ALLOC_TBL, _RP_TBL, sa.and_(_ALLOC_TBL.c.resource_provider_id == _RP_TBL.c.id, _RP_TBL.c.root_provider_id.in_(sa.bindparam( 'root_ids', expanding=True))) ) usage = sa.select( _ALLOC_TBL.c.resource_provider_id, _ALLOC_TBL.c.resource_class_id, sql.func.sum(_ALLOC_TBL.c.used).label('used'), ).select_from(derived_alloc_to_rp).group_by( _ALLOC_TBL.c.resource_provider_id, _ALLOC_TBL.c.resource_class_id ).subquery(name='usage') # Build a join between the resource providers and inventories table rpt_inv_join = sa.outerjoin(rpt, inv, rpt.c.id == inv.c.resource_provider_id) # And then join to the derived table of usages usage_join = sa.outerjoin( rpt_inv_join, usage, sa.and_( usage.c.resource_provider_id == inv.c.resource_provider_id, usage.c.resource_class_id == inv.c.resource_class_id, ), ) query = sa.select( rpt.c.id.label("resource_provider_id"), rpt.c.uuid.label("resource_provider_uuid"), inv.c.resource_class_id, inv.c.total, inv.c.reserved, inv.c.allocation_ratio, inv.c.max_unit, usage.c.used, ).select_from(usage_join).where( rpt.c.root_provider_id.in_(sa.bindparam( 'root_ids', expanding=True)) ) return ctx.session.execute(query, {'root_ids': list(root_ids)}).fetchall() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/objects/reshaper.py0000664000175000017500000001340400000000000023423 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
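# A sketch of how the reshape() function below is typically driven (the
# variable names here are hypothetical; in the service this is invoked by the
# reshaper HTTP handler):
#
#     >>> reshape(ctx, {rp: [new_vcpu_inv, new_disk_inv]}, allocation_list)
#
# The replacement happens in three phases, described inline in the function:
# interim "union" inventories per provider, a single allocation replacement,
# then the final requested inventories.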
from oslo_log import log as logging from placement import db_api from placement.objects import allocation as alloc_obj from placement.objects import inventory as inv_obj LOG = logging.getLogger(__name__) @db_api.placement_context_manager.writer def reshape(ctx, inventories, allocations): """The 'replace the world' strategy that is executed when we want to completely replace a set of provider inventory, allocation and consumer information in a single transaction. :note: The reason this has to be done in a single monolithic function is so we have a single top-level function on which to decorate with the @db_api.placement_context_manager.writer transaction context manager. Each time a top-level function that is decorated with this exits, the transaction is either COMMIT'd or ROLLBACK'd. We need to avoid calling two functions that are already decorated with a transaction context manager from a function that *isn't* decorated with the transaction context manager if we want all changes involved in the sub-functions to operate within a single DB transaction. :param ctx: `placement.context.RequestContext` object containing the DB transaction context. :param inventories: dict, keyed by ResourceProvider, of lists of `Inventory` objects representing the replaced inventory information for the provider. :param allocations: `AllocationList` object containing all allocations for all consumers being modified by the reshape operation. :raises: `exception.ConcurrentUpdateDetected` when any resource provider or consumer generation increment fails due to concurrent changes to the same objects. """ # The resource provider objects, keyed by provider UUID, that are involved # in this transaction. We keep a cache of these because as we perform the # various operations on the providers, their generations increment and we # want to "inject" the changed resource provider objects into the # AllocationList's objects before calling AllocationList.replace_all(). # We start with the providers in the allocation objects, but only use one # if we don't find it in the inventories. affected_providers = {alloc.resource_provider.uuid: alloc.resource_provider for alloc in allocations} # We have to do the inventory changes in two steps because: # - we can't delete inventories with allocations; and # - we can't create allocations on nonexistent inventories. # So in the first step we create a kind of "union" inventory for each # provider. It contains all the inventories that the request wishes to # exist in the end, PLUS any inventories that the request wished to remove # (in their original form). # Note that this can cause us to end up with an interim situation where we # have modified an inventory to have less capacity than is currently # allocated, but that's allowed by the code. If the final picture is # overcommitted, we'll get an appropriate exception when we replace the # allocations at the end. for rp, new_inv_list in inventories.items(): LOG.debug("reshaping: *interim* inventory replacement for provider %s", rp.uuid) # Update the cache. This may be replacing an entry that came from # allocations, or adding a new entry from inventories. affected_providers[rp.uuid] = rp # Optimization: If the new inventory is empty, the below would be # replacing it with itself (and incrementing the generation) # unnecessarily. if not new_inv_list: continue # A dict, keyed by resource class, of the Inventory objects. We start # with the original inventory list. 
inv_by_rc = { inv.resource_class: inv for inv in inv_obj.get_all_by_resource_provider(ctx, rp)} # Now add each inventory in the new inventory list. If an inventory for # that resource class existed in the original inventory list, it is # overwritten. for inv in new_inv_list: inv_by_rc[inv.resource_class] = inv # Set the interim inventory structure. rp.set_inventory(list(inv_by_rc.values())) # NOTE(jaypipes): The above inventory replacements will have # incremented the resource provider generations, so we need to look in # the AllocationList and swap the resource provider object with the one we # saved above that has the updated provider generation in it. for alloc in allocations: rp_uuid = alloc.resource_provider.uuid if rp_uuid in affected_providers: alloc.resource_provider = affected_providers[rp_uuid] # Now we can replace all the allocations LOG.debug("reshaping: attempting allocation replacement") alloc_obj.replace_all(ctx, allocations) # And finally, we can set the inventories to their actual desired state. for rp, new_inv_list in inventories.items(): LOG.debug("reshaping: *final* inventory replacement for provider %s", rp.uuid) rp.set_inventory(new_inv_list) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/objects/resource_class.py0000664000175000017500000002370500000000000024633 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os_resource_classes as orc from oslo_concurrency import lockutils from oslo_db import api as oslo_db_api from oslo_db import exception as db_exc from oslo_log import log as logging import sqlalchemy as sa from sqlalchemy import func from placement.db.sqlalchemy import models from placement import db_api from placement import exception _RC_TBL = models.ResourceClass.__table__ _RESOURCE_CLASSES_LOCK = 'resource_classes_sync' _RESOURCE_CLASSES_SYNCED = False LOG = logging.getLogger(__name__) class ResourceClass(object): MIN_CUSTOM_RESOURCE_CLASS_ID = 10000 """Any user-defined resource classes must have an identifier greater than or equal to this number. """ # Retry count for handling possible race condition in creating resource # class. We don't ever want to hit this, as it is simply a race when # creating these classes, but this is just a stopgap to prevent a potential # infinite loop. RESOURCE_CREATE_RETRY_COUNT = 100 def __init__(self, context, id=None, name=None, updated_at=None, created_at=None): self._context = context self.id = id self.name = name self.updated_at = updated_at self.created_at = created_at @staticmethod def _from_db_object(context, target, source): target._context = context target.id = source['id'] target.name = source['name'] target.updated_at = source['updated_at'] target.created_at = source['created_at'] return target @classmethod def get_by_name(cls, context, name): """Return a ResourceClass object with the given string name. 
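        :param context: placement.context.RequestContext providing the
            resource class cache used for the lookup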
:param name: String name of the resource class to find :raises: ResourceClassNotFound if no such resource class was found """ rc = context.rc_cache.all_from_string(name) obj = cls( context, id=rc.id, name=rc.name, updated_at=rc.updated_at, created_at=rc.created_at, ) return obj @staticmethod @db_api.placement_context_manager.reader def _get_next_id(context): """Utility method to grab the next resource class identifier to use for user-defined resource classes. """ query = context.session.query(func.max(models.ResourceClass.id)) max_id = query.one()[0] if not max_id or max_id < ResourceClass.MIN_CUSTOM_RESOURCE_CLASS_ID: return ResourceClass.MIN_CUSTOM_RESOURCE_CLASS_ID else: return max_id + 1 def create(self): if self.id is not None: raise exception.ObjectActionError(action='create', reason='already created') if not self.name: raise exception.ObjectActionError(action='create', reason='name is required') if self.name in orc.STANDARDS: raise exception.ResourceClassExists(resource_class=self.name) if not self.name.startswith(orc.CUSTOM_NAMESPACE): raise exception.ObjectActionError( action='create', reason='name must start with ' + orc.CUSTOM_NAMESPACE) updates = {} for field in ['name', 'updated_at', 'created_at']: value = getattr(self, field, None) if value: updates[field] = value # There is the possibility of a race when adding resource classes, as # the ID is generated locally. This loop catches that exception, and # retries until either it succeeds, or a different exception is # encountered. retries = self.RESOURCE_CREATE_RETRY_COUNT while retries: retries -= 1 try: rc = self._create_in_db(self._context, updates) self._from_db_object(self._context, self, rc) break except db_exc.DBDuplicateEntry as e: if 'id' in e.columns: # Race condition for ID creation; try again continue # The duplication is on the other unique column, 'name'. So do # not retry; raise the exception immediately. raise exception.ResourceClassExists(resource_class=self.name) else: # We have no idea how common it will be in practice for the retry # limit to be exceeded. We set it high in the hope that we never # hit this point, but added this log message so we know that this # specific situation occurred. LOG.warning("Exceeded retry limit on ID generation while " "creating ResourceClass %(name)s", {'name': self.name}) msg = "creating resource class %s" % self.name raise exception.MaxDBRetriesExceeded(action=msg) self._context.rc_cache.clear() @staticmethod @db_api.placement_context_manager.writer def _create_in_db(context, updates): next_id = ResourceClass._get_next_id(context) rc = models.ResourceClass() rc.update(updates) rc.id = next_id context.session.add(rc) return rc def destroy(self): if self.id is None: raise exception.ObjectActionError(action='destroy', reason='ID attribute not found') # Never delete any standard resource class. if self.id < ResourceClass.MIN_CUSTOM_RESOURCE_CLASS_ID: raise exception.ResourceClassCannotDeleteStandard( resource_class=self.name) self._destroy(self._context, self.id, self.name) self._context.rc_cache.clear() @staticmethod @db_api.placement_context_manager.writer def _destroy(context, _id, name): # Don't delete the resource class if it is referred to in the # inventories table. 
num_inv = context.session.query(models.Inventory).filter( models.Inventory.resource_class_id == _id).count() if num_inv: raise exception.ResourceClassInUse(resource_class=name) res = context.session.query(models.ResourceClass).filter( models.ResourceClass.id == _id).delete() if not res: raise exception.NotFound() def save(self): if self.id is None: raise exception.ObjectActionError(action='save', reason='ID attribute not found') updates = {} for field in ['name', 'updated_at', 'created_at']: value = getattr(self, field, None) if value: updates[field] = value # Never update any standard resource class. if self.id < ResourceClass.MIN_CUSTOM_RESOURCE_CLASS_ID: raise exception.ResourceClassCannotUpdateStandard( resource_class=self.name) self._save(self._context, self.id, self.name, updates) self._context.rc_cache.clear() @staticmethod @db_api.placement_context_manager.writer def _save(context, id, name, updates): db_rc = context.session.query(models.ResourceClass).filter_by( id=id).first() db_rc.update(updates) try: db_rc.save(context.session) except db_exc.DBDuplicateEntry: raise exception.ResourceClassExists(resource_class=name) def ensure_sync(ctx): global _RESOURCE_CLASSES_SYNCED # If another thread is doing this work, wait for it to complete. # When that thread is done _RESOURCE_CLASSES_SYNCED will be true in this # thread and we'll simply return. with lockutils.lock(_RESOURCE_CLASSES_LOCK): if not _RESOURCE_CLASSES_SYNCED: _resource_classes_sync(ctx) _RESOURCE_CLASSES_SYNCED = True def get_all(context): """Get a list of all the resource classes in the database.""" resource_classes = context.rc_cache.get_all() return [ResourceClass(context, **rc._mapping) for rc in resource_classes] @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) @db_api.placement_context_manager.writer def _resource_classes_sync(ctx): # Create a set of all resource class in the os_resource_classes library. sel = sa.select(_RC_TBL.c.name) res = ctx.session.execute(sel).fetchall() db_classes = [r[0] for r in res if not orc.is_custom(r[0])] LOG.debug("Found existing resource classes in db: %s", db_classes) # Determine those resource classes which are in os_resource_classes but not # currently in the database, and insert them. batch_args = [{'name': str(name), 'id': index} for index, name in enumerate(orc.STANDARDS) if name not in db_classes] ins = _RC_TBL.insert() if batch_args: conn = ctx.session.connection() if conn.engine.dialect.name == 'mysql': # We need to do a literal insert of 0 to preserve the order # of the resource class ids from the previous style of # managing them. In some mysql settings a 0 is the same as # "give me a default key". conn.execute( sa.text("SET SESSION SQL_MODE='NO_AUTO_VALUE_ON_ZERO'") ) try: ctx.session.execute(ins, batch_args) LOG.debug("Synced resource_classes from os_resource_classes: %s", batch_args) except db_exc.DBDuplicateEntry: pass # some other process sync'd, just ignore ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/objects/resource_provider.py0000664000175000017500000012744000000000000025361 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import copy # NOTE(cdent): The resource provider objects are designed to never be # used over RPC. Remote manipulation is done with the placement HTTP # API. The 'remotable' decorators should not be used, the objects should # not be registered and there is no need to express VERSIONs nor handle # obj_make_compatible. from oslo_db import api as oslo_db_api from oslo_db import exception as db_exc from oslo_log import log as logging from oslo_utils import excutils import sqlalchemy as sa from sqlalchemy import exc as sqla_exc from sqlalchemy import func from placement.db.sqlalchemy import models from placement import db_api from placement import exception from placement.objects import inventory as inv_obj from placement.objects import research_context as res_ctx from placement.objects import trait as trait_obj _ALLOC_TBL = models.Allocation.__table__ _INV_TBL = models.Inventory.__table__ _RP_TBL = models.ResourceProvider.__table__ _AGG_TBL = models.PlacementAggregate.__table__ _RP_AGG_TBL = models.ResourceProviderAggregate.__table__ _RP_TRAIT_TBL = models.ResourceProviderTrait.__table__ LOG = logging.getLogger(__name__) def _get_current_inventory_resources(ctx, rp): """Returns a set() containing the resource class IDs for all resources currently having an inventory record for the supplied resource provider. :param ctx: `placement.context.RequestContext` that may be used to grab a DB connection. :param rp: Resource provider to query inventory for. """ cur_res_sel = sa.select(_INV_TBL.c.resource_class_id).where( _INV_TBL.c.resource_provider_id == rp.id) existing_resources = ctx.session.execute(cur_res_sel).fetchall() return set([r[0] for r in existing_resources]) def _delete_inventory_from_provider(ctx, rp, to_delete): """Deletes any inventory records from the supplied provider and set() of resource class identifiers. If there are allocations for any of the inventories to be deleted raise InventoryInUse exception. :param ctx: `placement.context.RequestContext` that contains an oslo_db Session :param rp: Resource provider from which to delete inventory. :param to_delete: set() containing resource class IDs for records to delete. """ allocation_query = sa.select( _ALLOC_TBL.c.resource_class_id.label('resource_class'), ).where( sa.and_(_ALLOC_TBL.c.resource_provider_id == rp.id, _ALLOC_TBL.c.resource_class_id.in_(to_delete)) ).group_by(_ALLOC_TBL.c.resource_class_id) allocations = ctx.session.execute(allocation_query).fetchall() if allocations: resource_classes = ', '.join( [ctx.rc_cache.string_from_id(alloc[0]) for alloc in allocations]) raise exception.InventoryInUse(resource_classes=resource_classes, resource_provider=rp.uuid) del_stmt = _INV_TBL.delete().where( sa.and_( _INV_TBL.c.resource_provider_id == rp.id, _INV_TBL.c.resource_class_id.in_(to_delete))) res = ctx.session.execute(del_stmt) return res.rowcount def _add_inventory_to_provider(ctx, rp, inv_list, to_add): """Inserts new inventory records for the supplied resource provider. :param ctx: `placement.context.RequestContext` that contains an oslo_db Session :param rp: Resource provider to add inventory to. 
:param inv_list: List of Inventory objects :param to_add: set() containing resource class IDs to search inv_list for adding to resource provider. """ for rc_id in to_add: rc_str = ctx.rc_cache.string_from_id(rc_id) inv_record = inv_obj.find(inv_list, rc_str) ins_stmt = _INV_TBL.insert().values( resource_provider_id=rp.id, resource_class_id=rc_id, total=inv_record.total, reserved=inv_record.reserved, min_unit=inv_record.min_unit, max_unit=inv_record.max_unit, step_size=inv_record.step_size, allocation_ratio=inv_record.allocation_ratio) ctx.session.execute(ins_stmt) def _update_inventory_for_provider(ctx, rp, inv_list, to_update): """Updates existing inventory records for the supplied resource provider. :param ctx: `placement.context.RequestContext` that contains an oslo_db Session :param rp: Resource provider on which to update inventory. :param inv_list: List of Inventory objects :param to_update: set() containing resource class IDs to search inv_list for updating in resource provider. :returns: A list of (uuid, class) tuples that have exceeded their capacity after this inventory update. """ exceeded = [] for rc_id in to_update: rc_str = ctx.rc_cache.string_from_id(rc_id) inv_record = inv_obj.find(inv_list, rc_str) allocation_query = sa.select( func.sum(_ALLOC_TBL.c.used).label('usage')) allocation_query = allocation_query.where( sa.and_( _ALLOC_TBL.c.resource_provider_id == rp.id, _ALLOC_TBL.c.resource_class_id == rc_id)) allocations = ctx.session.execute(allocation_query).first() if ( allocations and allocations.usage is not None and allocations.usage > inv_record.capacity ): exceeded.append((rp.uuid, rc_str)) upd_stmt = _INV_TBL.update().where( sa.and_( _INV_TBL.c.resource_provider_id == rp.id, _INV_TBL.c.resource_class_id == rc_id) ).values( total=inv_record.total, reserved=inv_record.reserved, min_unit=inv_record.min_unit, max_unit=inv_record.max_unit, step_size=inv_record.step_size, allocation_ratio=inv_record.allocation_ratio) res = ctx.session.execute(upd_stmt) if not res.rowcount: raise exception.InventoryWithResourceClassNotFound( resource_class=rc_str) return exceeded @db_api.placement_context_manager.writer def _add_inventory(context, rp, inventory): """Add one Inventory that wasn't already on the provider. :raises `exception.ResourceClassNotFound` if inventory.resource_class cannot be found in the DB. """ rc_id = context.rc_cache.id_from_string(inventory.resource_class) _add_inventory_to_provider( context, rp, [inventory], set([rc_id])) rp.increment_generation() @db_api.placement_context_manager.writer def _update_inventory(context, rp, inventory): """Update an inventory already on the provider. :raises `exception.ResourceClassNotFound` if inventory.resource_class cannot be found in the DB. """ rc_id = context.rc_cache.id_from_string(inventory.resource_class) exceeded = _update_inventory_for_provider( context, rp, [inventory], set([rc_id])) rp.increment_generation() return exceeded @db_api.placement_context_manager.writer def _delete_inventory(context, rp, resource_class): """Delete up to one Inventory of the given resource_class string. :raises `exception.ResourceClassNotFound` if resource_class cannot be found in the DB. 
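    :raises `exception.NotFound` if the provider has no inventory of the
        given resource class to delete.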
""" rc_id = context.rc_cache.id_from_string(resource_class) if not _delete_inventory_from_provider(context, rp, [rc_id]): raise exception.NotFound( 'No inventory of class %s found for delete' % resource_class) rp.increment_generation() @db_api.placement_context_manager.writer def _set_inventory(context, rp, inv_list): """Given a list of Inventory objects, replaces the inventory of the resource provider in a safe, atomic fashion using the resource provider's generation as a consistent view marker. :param context: Nova RequestContext. :param rp: `ResourceProvider` object upon which to set inventory. :param inv_list: A list of `Inventory` objects to save to backend storage. :returns: A list of (uuid, class) tuples that have exceeded their capacity after this inventory update. :raises placement.exception.ConcurrentUpdateDetected: if another thread updated the same resource provider's view of its inventory or allocations in between the time when this object was originally read and the call to set the inventory. :raises `exception.ResourceClassNotFound` if any resource class in any inventory in inv_list cannot be found in the DB. :raises `exception.InventoryInUse` if we attempt to delete inventory from a provider that has allocations for that resource class. """ existing_resources = _get_current_inventory_resources(context, rp) these_resources = set([context.rc_cache.id_from_string(r.resource_class) for r in inv_list]) # Determine which resources we should be adding, deleting and/or # updating in the resource provider's inventory by comparing sets # of resource class identifiers. to_add = these_resources - existing_resources to_delete = existing_resources - these_resources to_update = these_resources & existing_resources exceeded = [] if to_delete: _delete_inventory_from_provider(context, rp, to_delete) if to_add: _add_inventory_to_provider(context, rp, inv_list, to_add) if to_update: exceeded = _update_inventory_for_provider(context, rp, inv_list, to_update) # Here is where we update the resource provider's generation value. If # this update updates zero rows, that means that another thread has updated # the inventory for this resource provider between the time the caller # originally read the resource provider record and inventory information # and this point. We raise an exception here which will rollback the above # transaction and return an error to the caller to indicate that they can # attempt to retry the inventory save after reverifying any capacity # conditions and re-reading the existing inventory information. rp.increment_generation() return exceeded @db_api.placement_context_manager.reader def _get_provider_by_uuid(context, uuid): """Given a UUID, return a dict of information about the resource provider from the database. 
:raises: NotFound if no such provider was found :param uuid: The UUID to look up """ rpt = sa.alias(_RP_TBL, name="rp") parent = sa.alias(_RP_TBL, name="parent") root = sa.alias(_RP_TBL, name="root") rp_to_root = sa.join(rpt, root, rpt.c.root_provider_id == root.c.id) rp_to_parent = sa.outerjoin( rp_to_root, parent, rpt.c.parent_provider_id == parent.c.id) sel = sa.select( rpt.c.id, rpt.c.uuid, rpt.c.name, rpt.c.generation, root.c.uuid.label("root_provider_uuid"), parent.c.uuid.label("parent_provider_uuid"), rpt.c.updated_at, rpt.c.created_at, ).select_from(rp_to_parent).where(rpt.c.uuid == uuid) res = context.session.execute(sel).fetchone() if not res: raise exception.NotFound( 'No resource provider with uuid %s found' % uuid) return dict(res._mapping) @db_api.placement_context_manager.reader def _get_aggregates_by_provider_id(context, rp_id): """Returns a dict, keyed by internal aggregate ID, of aggregate UUIDs associated with the supplied internal resource provider ID. """ join_statement = sa.join( _AGG_TBL, _RP_AGG_TBL, sa.and_( _AGG_TBL.c.id == _RP_AGG_TBL.c.aggregate_id, _RP_AGG_TBL.c.resource_provider_id == rp_id)) sel = sa.select(_AGG_TBL.c.id, _AGG_TBL.c.uuid).select_from( join_statement) return {r[0]: r[1] for r in context.session.execute(sel).fetchall()} def _ensure_aggregate(ctx, agg_uuid): """Finds an aggregate and returns its internal ID. If not found, creates the aggregate and returns the new aggregate's internal ID. If there is a race to create the aggregate (which can happen under rare high load conditions), retry up to 10 times. """ sel = sa.select(_AGG_TBL.c.id).where(_AGG_TBL.c.uuid == agg_uuid) res = ctx.session.execute(sel).fetchone() if res: return res[0] LOG.debug("_ensure_aggregate() did not find aggregate %s. " "Attempting to create it.", agg_uuid) try: ins_stmt = _AGG_TBL.insert().values(uuid=agg_uuid) res = ctx.session.execute(ins_stmt) agg_id = res.inserted_primary_key[0] LOG.debug("_ensure_aggregate() created new aggregate %s (id=%d).", agg_uuid, agg_id) return agg_id except db_exc.DBDuplicateEntry: # Something else added this agg_uuid in between our initial # fetch above and when we tried flushing this session. with excutils.save_and_reraise_exception(): LOG.debug("_ensure_provider() failed to create new aggregate %s. " "Another thread already created an aggregate record. ", agg_uuid) # _ensure_aggregate() can raise DBDuplicateEntry. Then we must start a new # transaction because the new aggregate entry can't be found in the old # transaction if the isolation level is set to "REPEATABLE_READ" @oslo_db_api.wrap_db_retry( max_retries=10, inc_retry_interval=False, exception_checker=lambda exc: isinstance(exc, db_exc.DBDuplicateEntry)) @db_api.placement_context_manager.writer def _set_aggregates(context, resource_provider, provided_aggregates, increment_generation=False): rp_id = resource_provider.id # When aggregate uuids are persisted no validation is done # to ensure that they refer to something that has meaning # elsewhere. It is assumed that code which makes use of the # aggregates, later, will validate their fitness. # TODO(cdent): At the moment we do not delete # a PlacementAggregate that no longer has any associations # with at least one resource provider. We may wish to do that # to avoid bloat if it turns out we're creating a lot of noise. # Not doing now to move things along. 
provided_aggregates = set(provided_aggregates) existing_aggregates = _get_aggregates_by_provider_id(context, rp_id) agg_uuids_to_add = provided_aggregates - set(existing_aggregates.values()) # A dict, keyed by internal aggregate ID, of aggregate UUIDs that will be # associated with the provider aggs_to_associate = {} # Same dict for those aggregates to remove the association with this # provider aggs_to_disassociate = { agg_id: agg_uuid for agg_id, agg_uuid in existing_aggregates.items() if agg_uuid not in provided_aggregates } # Create any aggregates that do not yet exist in # PlacementAggregates. This is different from # the set in existing_aggregates; those are aggregates for # which there are associations for the resource provider # at rp_id. The following loop checks for the existence of any # aggregate with the provided uuid. In this way we only # create a new row in the PlacementAggregate table if the # aggregate uuid has never been seen before. Code further # below will update the associations. for agg_uuid in agg_uuids_to_add: agg_id = _ensure_aggregate(context, agg_uuid) aggs_to_associate[agg_id] = agg_uuid for agg_id, agg_uuid in aggs_to_associate.items(): try: ins_stmt = _RP_AGG_TBL.insert().values( resource_provider_id=rp_id, aggregate_id=agg_id) context.session.execute(ins_stmt) LOG.debug("Setting aggregates for provider %s. Successfully " "associated aggregate %s.", resource_provider.uuid, agg_uuid) except db_exc.DBDuplicateEntry: LOG.debug("Setting aggregates for provider %s. Another thread " "already associated aggregate %s. Skipping.", resource_provider.uuid, agg_uuid) pass for agg_id, agg_uuid in aggs_to_disassociate.items(): del_stmt = _RP_AGG_TBL.delete().where( sa.and_( _RP_AGG_TBL.c.resource_provider_id == rp_id, _RP_AGG_TBL.c.aggregate_id == agg_id)) context.session.execute(del_stmt) LOG.debug("Setting aggregates for provider %s. Successfully " "disassociated aggregate %s.", resource_provider.uuid, agg_uuid) if increment_generation: resource_provider.increment_generation() def _add_traits_to_provider(ctx, rp_id, to_add): """Adds trait associations to the provider with the supplied ID. :param ctx: `placement.context.RequestContext` that has an oslo_db Session :param rp_id: Internal ID of the resource provider on which to add trait associations :param to_add: set() containing internal trait IDs for traits to add """ for trait_id in to_add: try: ins_stmt = _RP_TRAIT_TBL.insert().values( resource_provider_id=rp_id, trait_id=trait_id) ctx.session.execute(ins_stmt) except db_exc.DBDuplicateEntry: # Another thread already set this trait for this provider. Ignore # this for now (but ConcurrentUpdateDetected will end up being # raised almost assuredly when we go to increment the resource # provider's generation later, but that's also fine) pass def _delete_traits_from_provider(ctx, rp_id, to_delete): """Deletes trait associations from the provider with the supplied ID and set() of internal trait IDs. 
:param ctx: `placement.context.RequestContext` that has an oslo_db Session :param rp_id: Internal ID of the resource provider from which to delete trait associations :param to_delete: set() containing internal trait IDs for traits to delete """ del_stmt = _RP_TRAIT_TBL.delete().where( sa.and_( _RP_TRAIT_TBL.c.resource_provider_id == rp_id, _RP_TRAIT_TBL.c.trait_id.in_(to_delete))) ctx.session.execute(del_stmt) @db_api.placement_context_manager.writer def _set_traits(context, rp, traits): """Given a ResourceProvider object and a list of Trait objects, replaces the set of traits associated with the resource provider. :raises: ConcurrentUpdateDetected if the resource provider's traits or inventory was changed in between the time when we first started to set traits and the end of this routine. :param rp: The ResourceProvider object to set traits against :param traits: List of Trait objects """ # Get the internal IDs of our existing traits existing_traits = trait_obj.get_traits_by_provider_id(context, rp.id) existing_traits = set(rec.id for rec in existing_traits) want_traits = set(trait.id for trait in traits) to_add = want_traits - existing_traits to_delete = existing_traits - want_traits if not to_add and not to_delete: return if to_delete: _delete_traits_from_provider(context, rp.id, to_delete) if to_add: _add_traits_to_provider(context, rp.id, to_add) rp.increment_generation() @db_api.placement_context_manager.reader def _has_child_providers(context, rp_id): """Returns True if the supplied resource provider has any child providers, False otherwise """ child_sel = sa.select(_RP_TBL.c.id) child_sel = child_sel.where(_RP_TBL.c.parent_provider_id == rp_id) child_res = context.session.execute(child_sel.limit(1)).fetchone() if child_res: return True return False @db_api.placement_context_manager.writer def set_root_provider_ids(context, batch_size): """Simply sets the root_provider_id value for a provider identified by rp_id. Used in explicit online data migration via CLI. 
:param rp_id: Internal ID of the provider to update :param root_id: Value to set root provider to """ # UPDATE resource_providers # SET root_provider_id=resource_providers.id # WHERE resource_providers.id # IN (SELECT subq_1.id # FROM (SELECT resource_providers.id AS id # FROM resource_providers # WHERE resource_providers.root_provider_id IS NULL # LIMIT :param_1) # AS subq_1) subq_1 = context.session.query(_RP_TBL.c.id) subq_1 = subq_1.filter(_RP_TBL.c.root_provider_id.is_(None)) subq_1 = subq_1.limit(batch_size) subq_1 = subq_1.subquery(name="subq_1") subq_2 = sa.select(subq_1.c.id).select_from(subq_1).scalar_subquery() upd = _RP_TBL.update().where(_RP_TBL.c.id.in_(subq_2)) upd = upd.values(root_provider_id=_RP_TBL.c.id) res = context.session.execute(upd) return res.rowcount, res.rowcount @db_api.placement_context_manager.writer def _delete_rp_record(context, _id): query = context.session.query(models.ResourceProvider) query = query.filter(models.ResourceProvider.id == _id) return query.delete(synchronize_session=False) class ResourceProvider(object): SETTABLE_FIELDS = ('name', 'parent_provider_uuid') __slots__ = ('_context', 'id', 'uuid', 'name', 'generation', 'parent_provider_uuid', 'root_provider_uuid', 'updated_at', 'created_at') def __init__(self, context, id=None, uuid=None, name=None, generation=None, parent_provider_uuid=None, root_provider_uuid=None, updated_at=None, created_at=None): self._context = context self.id = id self.uuid = uuid self.name = name self.generation = generation # UUID of the root provider in a hierarchy of providers. Will be equal # to the uuid field if this provider is the root provider of a # hierarchy. This field is never manually set by the user. Instead, it # is automatically set to either the root provider UUID of the parent # or the UUID of the provider itself if there is no parent. This field # is an optimization field that allows us to very quickly query for all # providers within a particular tree without doing any recursive # querying. self.root_provider_uuid = root_provider_uuid # UUID of the direct parent provider, or None if this provider is a # "root" provider. self.parent_provider_uuid = parent_provider_uuid self.updated_at = updated_at self.created_at = created_at def create(self): if self.id is not None: raise exception.ObjectActionError(action='create', reason='already created') if self.uuid is None: raise exception.ObjectActionError(action='create', reason='uuid is required') if not self.name: raise exception.ObjectActionError(action='create', reason='name is required') # These are the only fields we are willing to create with. # If there are others, ignore them. updates = { 'name': self.name, 'uuid': self.uuid, 'parent_provider_uuid': self.parent_provider_uuid, } self._create_in_db(self._context, updates) def destroy(self): self._delete(self._context, self.id) def save(self, allow_reparenting=False): """Save the changes to the database :param allow_reparenting: If True then it allows changing the parent RP to a different RP as well as changing it to None (un-parenting). If False, then only changing the parent from None to an RP is allowed the rest is rejected with ObjectActionError. """ # These are the only fields we are willing to save with. # If there are others, ignore them. 
updates = { 'name': self.name, 'parent_provider_uuid': self.parent_provider_uuid, } self._update_in_db(self._context, self.id, updates, allow_reparenting) @classmethod def get_by_uuid(cls, context, uuid): """Returns a new ResourceProvider object with the supplied UUID. :raises NotFound if no such provider could be found :param uuid: UUID of the provider to search for """ rp_rec = _get_provider_by_uuid(context, uuid) return cls._from_db_object(context, cls(context), rp_rec) def add_inventory(self, inventory): """Add one new Inventory to the resource provider. Fails if Inventory of the provided resource class is already present. """ _add_inventory(self._context, self, inventory) def delete_inventory(self, resource_class): """Delete Inventory of provided resource_class.""" _delete_inventory(self._context, self, resource_class) def set_inventory(self, inv_list): """Set all resource provider Inventory to be the provided list.""" exceeded = _set_inventory(self._context, self, inv_list) for uuid, rclass in exceeded: LOG.warning('Resource provider %(uuid)s is now over-' 'capacity for %(resource)s', {'uuid': uuid, 'resource': rclass}) def update_inventory(self, inventory): """Update one existing Inventory of the same resource class. Fails if no Inventory of the same class is present. """ exceeded = _update_inventory(self._context, self, inventory) for uuid, rclass in exceeded: LOG.warning('Resource provider %(uuid)s is now over-' 'capacity for %(resource)s', {'uuid': uuid, 'resource': rclass}) def get_aggregates(self): """Get the aggregate uuids associated with this resource provider.""" return list( _get_aggregates_by_provider_id(self._context, self.id).values()) def set_aggregates(self, aggregate_uuids, increment_generation=False): """Set the aggregate uuids associated with this resource provider. If an aggregate does not exist, one will be created using the provided uuid. The resource provider generation is incremented if and only if the increment_generation parameter is True. """ _set_aggregates(self._context, self, aggregate_uuids, increment_generation=increment_generation) def set_traits(self, traits): """Replaces the set of traits associated with the resource provider with the given list of Trait objects. :param traits: A list of Trait objects representing the traits to associate with the provider. """ _set_traits(self._context, self, traits) def increment_generation(self): """Increments this provider's generation value, supplying the currently-known generation. :raises placement.exception.ConcurrentUpdateDetected: if another thread updated the resource provider's view of its inventory or allocations in between the time when this object was originally read and the call to set the inventory. """ rp_gen = self.generation new_generation = rp_gen + 1 upd_stmt = _RP_TBL.update().where(sa.and_( _RP_TBL.c.id == self.id, _RP_TBL.c.generation == rp_gen)).values( generation=new_generation) res = self._context.session.execute(upd_stmt) if res.rowcount != 1: raise exception.ResourceProviderConcurrentUpdateDetected() self.generation = new_generation @db_api.placement_context_manager.writer def _create_in_db(self, context, updates): parent_id = None root_id = None # User supplied a parent, let's make sure it exists parent_uuid = updates.pop('parent_provider_uuid') if parent_uuid is not None: # Setting parent to ourselves doesn't make any sense if parent_uuid == self.uuid: raise exception.ObjectActionError( action='create', reason='parent provider UUID cannot be same as UUID. 
' 'Please set parent provider UUID to None if ' 'there is no parent.') parent_ids = res_ctx.provider_ids_from_uuid(context, parent_uuid) if parent_ids is None: raise exception.ObjectActionError( action='create', reason='parent provider UUID does not exist.') parent_id = parent_ids.id root_id = parent_ids.root_id updates['root_provider_id'] = root_id updates['parent_provider_id'] = parent_id self.root_provider_uuid = parent_ids.root_uuid db_rp = models.ResourceProvider() db_rp.update(updates) context.session.add(db_rp) context.session.flush() self.id = db_rp.id self.generation = db_rp.generation if root_id is None: # User did not specify a parent when creating this provider, so the # root_provider_id needs to be set to this provider's newly-created # internal ID db_rp.root_provider_id = db_rp.id context.session.add(db_rp) context.session.flush() self.root_provider_uuid = self.uuid @staticmethod @db_api.placement_context_manager.writer def _delete(context, _id): # Do a quick check to see if the provider is a parent. If it is, don't # allow deleting the provider. Note that the foreign key constraint on # resource_providers.parent_provider_id will prevent deletion of the # parent within the transaction below. This is just a quick # short-circuit outside of the transaction boundary. if _has_child_providers(context, _id): raise exception.CannotDeleteParentResourceProvider() # Don't delete the resource provider if it has allocations. rp_allocations = context.session.query(models.Allocation).filter( models.Allocation.resource_provider_id == _id).count() if rp_allocations: raise exception.ResourceProviderInUse() # Delete any inventory associated with the resource provider query = context.session.query(models.Inventory) query = query.filter(models.Inventory.resource_provider_id == _id) query.delete(synchronize_session=False) # Delete any aggregate associations for the resource provider # The name substitution on the next line is needed to satisfy pep8 RPA_model = models.ResourceProviderAggregate context.session.query(RPA_model).filter( RPA_model.resource_provider_id == _id).delete() # delete any trait associations for the resource provider RPT_model = models.ResourceProviderTrait context.session.query(RPT_model).filter( RPT_model.resource_provider_id == _id).delete() # set root_provider_id to null to make deletion possible query = context.session.query(models.ResourceProvider) query = query.filter( models.ResourceProvider.id == _id, models.ResourceProvider.root_provider_id == _id) query.update({'root_provider_id': None}) # Now delete the RP record try: result = _delete_rp_record(context, _id) except sqla_exc.IntegrityError: # NOTE(jaypipes): Another thread snuck in and parented this # resource provider in between the above check for # _has_child_providers() and our attempt to delete the record raise exception.CannotDeleteParentResourceProvider() if not result: raise exception.NotFound() @db_api.placement_context_manager.writer def _update_in_db(self, context, id, updates, allow_reparenting): # A list of resource providers in the subtree of resource provider to # update subtree_rps = [] # The new root RP if changed new_root_id = None new_root_uuid = None if 'parent_provider_uuid' in updates: my_ids = res_ctx.provider_ids_from_uuid(context, self.uuid) parent_uuid = updates.pop('parent_provider_uuid') if parent_uuid is not None: parent_ids = res_ctx.provider_ids_from_uuid( context, parent_uuid) # User supplied a parent, let's make sure it exists if parent_ids is None: raise exception.ObjectActionError( 
action='create', reason='parent provider UUID does not exist.') if (my_ids.parent_id is not None and my_ids.parent_id != parent_ids.id and not allow_reparenting): raise exception.ObjectActionError( action='update', reason='re-parenting a provider is not currently ' 'allowed.') # So the user specified a new parent. We have to make sure # that the new parent is not a descendant of the # current RP to avoid a loop in the graph. It could be # easily checked by traversing the tree from the new parent # up to the root and see if we ever hit the current RP # along the way. However later we need to update every # descendant of the current RP with a possibly new root # so we go with the more expensive way and gather every # descendant for the current RP and check if the new # parent is part of that set. subtree_rps = self.get_subtree(context) subtree_rp_uuids = {rp.uuid for rp in subtree_rps} if parent_uuid in subtree_rp_uuids: raise exception.ObjectActionError( action='update', reason='creating loop in the provider tree is ' 'not allowed.') updates['root_provider_id'] = parent_ids.root_id updates['parent_provider_id'] = parent_ids.id self.root_provider_uuid = parent_ids.root_uuid new_root_id = parent_ids.root_id new_root_uuid = parent_ids.root_uuid else: if my_ids.parent_id is not None: if not allow_reparenting: raise exception.ObjectActionError( action='update', reason='un-parenting a provider is not currently ' 'allowed.') # we don't need to do loop detection but we still need to # collect the RPs from the subtree so that the new root # value is updated in the whole subtree below. subtree_rps = self.get_subtree(context) # this RP becomes a new root RP updates['root_provider_id'] = my_ids.id updates['parent_provider_id'] = None self.root_provider_uuid = my_ids.uuid new_root_id = my_ids.id new_root_uuid = my_ids.uuid db_rp = context.session.query(models.ResourceProvider).filter_by( id=id).first() db_rp.update(updates) context.session.add(db_rp) # We should also update the root providers of the resource providers # that are in our subtree for rp in subtree_rps: # If the parent is not updated, this clause is skipped since the # `subtree_rps` has no element. rp.root_provider_uuid = new_root_uuid db_rp = context.session.query( models.ResourceProvider).filter_by(id=rp.id).first() data = {'root_provider_id': new_root_id} db_rp.update(data) context.session.add(db_rp) try: context.session.flush() except sqla_exc.IntegrityError: # NOTE(jaypipes): Another thread snuck in and deleted the parent # for this resource provider in between the above check for a valid # parent provider and here... raise exception.ObjectActionError( action='update', reason='parent provider UUID does not exist.') @staticmethod @db_api.placement_context_manager.reader def _from_db_object(context, resource_provider, db_resource_provider): for field in ['id', 'uuid', 'name', 'generation', 'root_provider_uuid', 'parent_provider_uuid', 'updated_at', 'created_at']: setattr(resource_provider, field, db_resource_provider[field]) return resource_provider def get_subtree(self, context, rp_uuid_to_child_rps=None): """Return every RP from the same tree that is part of the subtree rooted at the current RP. :param context: the request context :param rp_uuid_to_child_rps: a dict of list of children ResourceProviders keyed by the UUID of their parent RP. If it is None then this dict is calculated locally. 
:return: a list of ResourceProvider objects """ # if we are at a start of a recursion then prepare some data structure if rp_uuid_to_child_rps is None: same_tree = get_all_by_filters( context, filters={'in_tree': self.uuid}) rp_uuid_to_child_rps = collections.defaultdict(set) for rp in same_tree: if rp.parent_provider_uuid: rp_uuid_to_child_rps[rp.parent_provider_uuid].add(rp) subtree = [self] for child_rp in rp_uuid_to_child_rps[self.uuid]: subtree.extend( child_rp.get_subtree(context, rp_uuid_to_child_rps)) return subtree @db_api.placement_context_manager.reader def _get_all_by_filters_from_db(context, filters): # Eg. filters can be: # filters = { # 'name': , # 'uuid': , # 'member_of': [[, ], # []] # 'forbidden_aggs': [, ] # 'resources': { # 'VCPU': 1, # 'MEMORY_MB': 1024 # }, # 'in_tree': , # 'required_traits': [{, ...}, {...}] # 'forbidden_traits': {, ...} # } if not filters: filters = {} else: # Since we modify the filters, copy them so that we don't modify # them in the calling program. filters = copy.deepcopy(filters) name = filters.pop('name', None) uuid = filters.pop('uuid', None) member_of = filters.pop('member_of', []) forbidden_aggs = filters.pop('forbidden_aggs', []) required_traits = filters.pop('required_traits', []) forbidden_traits = filters.pop('forbidden_traits', {}) resources = filters.pop('resources', {}) in_tree = filters.pop('in_tree', None) rp = sa.alias(_RP_TBL, name="rp") root_rp = sa.alias(_RP_TBL, name="root_rp") parent_rp = sa.alias(_RP_TBL, name="parent_rp") rp_to_root = sa.join( rp, root_rp, rp.c.root_provider_id == root_rp.c.id) rp_to_parent = sa.outerjoin( rp_to_root, parent_rp, rp.c.parent_provider_id == parent_rp.c.id) query = sa.select( rp.c.id, rp.c.uuid, rp.c.name, rp.c.generation, rp.c.updated_at, rp.c.created_at, root_rp.c.uuid.label("root_provider_uuid"), parent_rp.c.uuid.label("parent_provider_uuid"), ).select_from(rp_to_parent) if name: query = query.where(rp.c.name == name) if uuid: query = query.where(rp.c.uuid == uuid) if in_tree: # The 'in_tree' parameter is the UUID of a resource provider that # the caller wants to limit the returned providers to only those # within its "provider tree". So, we look up the resource provider # having the UUID specified by the 'in_tree' parameter and grab the # root_provider_id value of that record. We can then ask for only # those resource providers having a root_provider_id of that value. tree_ids = res_ctx.provider_ids_from_uuid(context, in_tree) if tree_ids is None: # List operations should simply return an empty list when a # non-existing resource provider UUID is given. 
return [] root_id = tree_ids.root_id query = query.where(rp.c.root_provider_id == root_id) if required_traits: # translate trait names to trait internal IDs while keeping the nested # structure required_traits = [ { context.trait_cache.id_from_string(trait) for trait in any_traits } for any_traits in required_traits ] rps_with_matching_traits = ( res_ctx.provider_ids_matching_required_traits( context, required_traits)) if not rps_with_matching_traits: return [] query = query.where(rp.c.id.in_(rps_with_matching_traits)) if forbidden_traits: trait_map = trait_obj.ids_from_names(context, forbidden_traits) trait_rps = res_ctx.get_provider_ids_having_any_trait( context, trait_map.values()) if trait_rps: query = query.where(~rp.c.id.in_(trait_rps)) if member_of: rps_in_aggs = res_ctx.provider_ids_matching_aggregates( context, member_of) if not rps_in_aggs: return [] query = query.where(rp.c.id.in_(rps_in_aggs)) if forbidden_aggs: rps_bad_aggs = res_ctx.provider_ids_matching_aggregates( context, [forbidden_aggs]) if rps_bad_aggs: query = query.where(~rp.c.id.in_(rps_bad_aggs)) for rc_name, amount in resources.items(): rc_id = context.rc_cache.id_from_string(rc_name) rps_with_resource = res_ctx.get_providers_with_resource( context, rc_id, amount) rps_with_resource = (rp[0] for rp in rps_with_resource) query = query.where(rp.c.id.in_(rps_with_resource)) return context.session.execute(query).fetchall() def get_all_by_filters(context, filters=None): """Returns a list of `ResourceProvider` objects that have sufficient resources in their inventories to satisfy the amounts specified in the `filters` parameter. If no resource providers can be found, the function will return an empty list. :param context: `placement.context.RequestContext` that may be used to grab a DB connection. :param filters: Can be `name`, `uuid`, `member_of`, `in_tree`, `required_traits`, `forbidden_traits`, or `resources` where `member_of` is a list of list of aggregate UUIDs, `required_traits` is a list of set of trait names, `forbidden_traits` is a set of trait names, `in_tree` is a UUID of a resource provider that we can use to find the root provider ID of the tree of providers to filter results by and `resources` is a dict of amounts keyed by resource classes. :type filters: dict """ resource_providers = _get_all_by_filters_from_db(context, filters) return [ ResourceProvider(context, **rp._mapping) for rp in resource_providers ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/objects/rp_candidates.py0000664000175000017500000000746600000000000024425 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
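# NOTE(editor): Illustrative sketch only, added for documentation; it is
# not part of the shipped module. It demonstrates how the RPCandidateList
# helper defined below is meant to be combined while computing allocation
# candidates: one candidate list is built per requested resource class
# and the lists are then merged so that only provider trees able to
# satisfy every requested resource survive. The provider and resource
# class IDs are made-up values.
def _example_merge_candidates():
    # Providers able to supply resource class 1 (say, VCPU), given as
    # (provider internal ID, root provider internal ID) tuples.
    vcpu_rps = RPCandidateList()
    vcpu_rps.add_rps([(101, 100), (201, 200)], rc_id=1)

    # Providers able to supply resource class 2 (say, MEMORY_MB).
    ram_rps = RPCandidateList()
    ram_rps.add_rps([(100, 100), (301, 300)], rc_id=2)

    # Keep only candidates in trees that can satisfy both resources; in
    # this data only the tree rooted at provider 100 survives.
    vcpu_rps.merge_common_trees(ram_rps)
    assert vcpu_rps.trees == {100}
    assert vcpu_rps.all_rps == {100, 101}
    return vcpu_rps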
"""Utility methods for getting allocation candidates.""" import collections RPCandidate = collections.namedtuple('RPCandidates', 'id root_id rc_id') class RPCandidateList(object): """Helper class to manage allocation candidate resource providers list, RPCandidates, which consists of three-tuples with the first element being the resource provider ID, the second element being the root provider ID and the third being resource class ID. """ def __init__(self, rp_candidates=None): self.rp_candidates = rp_candidates or set() def __len__(self): return len(self.rp_candidates) def __bool__(self): return bool(len(self)) def __nonzero__(self): return self.__bool__() def merge_common_trees(self, other): """Merge two RPCandidateLists by OR'ing the two list of candidates and if the tree is not in both RPCandidateLists, we exclude resource providers in that tree. This is used to get trees that can satisfy all requested resource. """ if not self: self.rp_candidates = other.rp_candidates elif not other: pass else: trees_in_both = self.trees & other.trees self.rp_candidates |= other.rp_candidates self.filter_by_tree(trees_in_both) def add_rps(self, rps, rc_id): """Add given resource providers to the candidate list. :param rps: tuples of (resource provider ID, anchor root provider ID) :param rc_id: ID of the class of resource provided by these resource providers """ self.rp_candidates |= set( RPCandidate(id=rp[0], root_id=rp[1], rc_id=rc_id) for rp in rps) def filter_by_tree(self, tree_root_ids): """Filter the candidates by given trees""" self.rp_candidates = set( p for p in self.rp_candidates if p.root_id in tree_root_ids) def filter_by_rp(self, rptuples): """Filter the candidates by given resource provider""" self.rp_candidates = set( p for p in self.rp_candidates if (p.id, p.root_id) in rptuples) def filter_by_rp_or_tree(self, rp_ids): """Filter the candidates out if neither itself nor its root is in given resource providers """ self.rp_candidates = set( p for p in self.rp_candidates if set([p.id, p.root_id]) & rp_ids) def filter_by_rp_nor_tree(self, rp_ids): """Filter the candidates out if either itself or its root is in given resource providers """ self.rp_candidates = set( p for p in self.rp_candidates if not ( set([p.id, p.root_id]) & rp_ids)) @property def rps(self): """Returns a set of IDs of nominated resource providers""" return set(p.id for p in self.rp_candidates) @property def trees(self): """Returns a set of nominated trees each of which are expressed by the root provider ID """ return set(p.root_id for p in self.rp_candidates) @property def all_rps(self): """Returns a set of IDs of all involved resource providers""" return (self.rps | self.trees) @property def rps_info(self): return self.rp_candidates ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/objects/trait.py0000664000175000017500000002514700000000000022744 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import collections import os_traits from oslo_concurrency import lockutils from oslo_db import api as oslo_db_api from oslo_db import exception as db_exc from oslo_log import log as logging import sqlalchemy as sa from sqlalchemy.engine import row as sa_row from placement.db.sqlalchemy import models from placement import db_api from placement import exception _RP_TBL = models.ResourceProvider.__table__ _RP_TRAIT_TBL = models.ResourceProviderTrait.__table__ _TRAIT_TBL = models.Trait.__table__ _TRAIT_LOCK = 'trait_sync' _TRAITS_SYNCED = False LOG = logging.getLogger(__name__) class Trait(object): # All the user-defined traits must begin with this prefix. CUSTOM_NAMESPACE = 'CUSTOM_' def __init__(self, context, id=None, name=None, updated_at=None, created_at=None): self._context = context self.id = id self.name = name self.updated_at = updated_at self.created_at = created_at # FIXME(cdent): Duped from resource_class. @staticmethod def _from_db_object(context, target, source): target._context = context target.id = source['id'] target.name = source['name'] target.updated_at = source['updated_at'] target.created_at = source['created_at'] return target @staticmethod @db_api.placement_context_manager.writer def _create_in_db(context, updates): trait = models.Trait() trait.update(updates) context.session.add(trait) return trait def create(self): if self.id is not None: raise exception.ObjectActionError(action='create', reason='already created') if not self.name: raise exception.ObjectActionError(action='create', reason='name is required') # FIXME(cdent): duped from resource class updates = {} for field in ['name', 'updated_at', 'created_at']: value = getattr(self, field, None) if value: updates[field] = value try: db_trait = self._create_in_db(self._context, updates) except db_exc.DBDuplicateEntry: raise exception.TraitExists(name=self.name) self._from_db_object(self._context, self, db_trait) self._context.trait_cache.clear() @classmethod def get_by_name(cls, context, name): trait = context.trait_cache.all_from_string(name) return cls._from_db_object(context, cls(context), trait._asdict()) @staticmethod @db_api.placement_context_manager.writer def _destroy_in_db(context, _id, name): num = context.session.query(models.ResourceProviderTrait).filter( models.ResourceProviderTrait.trait_id == _id).count() if num: raise exception.TraitInUse(name=name) res = context.session.query(models.Trait).filter_by( name=name).delete() if not res: raise exception.TraitNotFound(name=name) def destroy(self): if not self.name: raise exception.ObjectActionError(action='destroy', reason='name is required') if not self.name.startswith(self.CUSTOM_NAMESPACE): raise exception.TraitCannotDeleteStandard(name=self.name) if self.id is None: raise exception.ObjectActionError(action='destroy', reason='ID attribute not found') self._destroy_in_db(self._context, self.id, self.name) self._context.trait_cache.clear() def ensure_sync(ctx): """Ensures that the os_traits library is synchronized to the traits db. If _TRAITS_SYNCED is False then this process has not tried to update the traits db. Do so by calling _trait_sync. Since the placement API server could be multi-threaded, lock around testing _TRAITS_SYNCED to avoid duplicating work. Different placement API server processes that talk to the same database will avoid issues through the power of transactions. :param ctx: `placement.context.RequestContext` that may be used to grab a DB connection. 
""" global _TRAITS_SYNCED # If another thread is doing this work, wait for it to complete. # When that thread is done _TRAITS_SYNCED will be true in this # thread and we'll simply return. with lockutils.lock(_TRAIT_LOCK): if not _TRAITS_SYNCED: _trait_sync(ctx) _TRAITS_SYNCED = True def get_all(context, filters=None): db_traits = _get_all_from_db(context, filters) # FIXME(stephenfin): This is necessary because our cached object type is # different from what we're getting from the database. We should use the # same result = [] for trait in db_traits: if isinstance(trait, sa_row.Row): result.append(Trait(context, **trait._mapping)) else: result.append(Trait(context, **trait)) return result def get_all_by_resource_provider(context, rp): """Returns a list containing Trait objects for any trait associated with the supplied resource provider. """ db_traits = get_traits_by_provider_id(context, rp.id) return [Trait(context, **data._mapping) for data in db_traits] @db_api.placement_context_manager.reader def get_traits_by_provider_id(context, rp_id): rp_traits_id = _RP_TRAIT_TBL.c.resource_provider_id trait_id = _RP_TRAIT_TBL.c.trait_id trait_cache = context.trait_cache sel = sa.select(trait_id).where(rp_traits_id == rp_id) return [ trait_cache.all_from_string(trait_cache.string_from_id(r.trait_id)) for r in context.session.execute(sel).fetchall()] @db_api.placement_context_manager.reader def get_traits_by_provider_tree(ctx, root_ids): """Returns a dict, keyed by provider IDs for all resource providers in all trees indicated in the ``root_ids``, of string trait names associated with that provider. :raises: ValueError when root_ids is empty. :param ctx: placement.context.RequestContext object :param root_ids: list of root resource provider IDs """ if not root_ids: raise ValueError("Expected root_ids to be a list of root resource " "provider internal IDs, but got an empty list.") rpt = sa.alias(_RP_TBL, name='rpt') rptt = sa.alias(_RP_TRAIT_TBL, name='rptt') rpt_rptt = sa.join(rpt, rptt, rpt.c.id == rptt.c.resource_provider_id) sel = sa.select(rptt.c.resource_provider_id, rptt.c.trait_id) sel = sel.select_from(rpt_rptt) sel = sel.where(rpt.c.root_provider_id.in_( sa.bindparam('root_ids', expanding=True))) res = collections.defaultdict(list) for r in ctx.session.execute(sel, {'root_ids': list(root_ids)}): res[r[0]].append(ctx.trait_cache.string_from_id(r[1])) return res def ids_from_names(ctx, names): """Given a list of string trait names, returns a dict, keyed by those string names, of the corresponding internal integer trait ID. :raises: ValueError when names is empty. :param ctx: placement.context.RequestContext object :param names: list of string trait names :raise TraitNotFound: if any named trait doesn't exist in the database. """ if not names: raise ValueError("Expected names to be a list of string trait " "names, but got an empty list.") return {name: ctx.trait_cache.id_from_string(name) for name in names} def _get_all_from_db(context, filters): # If no filters are required, returns everything from the cache. 
if not filters: return context.trait_cache.get_all() return _get_all_filtered_from_db(context, filters) @db_api.placement_context_manager.reader def _get_all_filtered_from_db(context, filters): query = context.session.query(models.Trait) if 'name_in' in filters: query = query.filter(models.Trait.name.in_( [str(n) for n in filters['name_in']] )) if 'prefix' in filters: query = query.filter( models.Trait.name.like(str(filters['prefix'] + '%'))) if 'associated' in filters: if filters['associated']: query = query.join( models.ResourceProviderTrait, models.Trait.id == models.ResourceProviderTrait.trait_id ).distinct() else: query = query.outerjoin( models.ResourceProviderTrait, models.Trait.id == models.ResourceProviderTrait.trait_id ).filter(models.ResourceProviderTrait.trait_id == sa.null()) return query.all() @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True) # Bug #1760322: If the caller raises an exception, we don't want the trait # sync rolled back; so use an .independent transaction @db_api.placement_context_manager.writer def _trait_sync(ctx): """Sync the os_traits symbols to the database. Reads all symbols from the os_traits library, checks if any of them do not exist in the database and bulk-inserts those that are not. This is done once per web-service process, at startup. :param ctx: `placement.context.RequestContext` that may be used to grab a DB connection. """ # Create a set of all traits in the os_traits library. std_traits = set(os_traits.get_traits()) sel = sa.select(_TRAIT_TBL.c.name) res = ctx.session.execute(sel).fetchall() # Create a set of all traits in the db that are not custom # traits. db_traits = set( r[0] for r in res if not os_traits.is_custom(r[0]) ) # Determine those traits which are in os_traits but not # currently in the database, and insert them. need_sync = std_traits - db_traits ins = _TRAIT_TBL.insert() batch_args = [ {'name': str(trait)} for trait in need_sync ] if batch_args: try: ctx.session.execute(ins, batch_args) LOG.debug("Synced traits from os_traits into API DB: %s", need_sync) except db_exc.DBDuplicateEntry: pass # some other process sync'd, just ignore ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/objects/usage.py0000664000175000017500000002140000000000000022711 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
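# NOTE: A minimal, illustrative sketch of how the Trait helpers defined in
# the module above (assumed importable as placement.objects.trait) might be
# exercised. It is not part of this file; ``ctx`` is assumed to be a
# placement.context.RequestContext backed by a configured database, and
# ``CUSTOM_GOLD`` is an example trait name.
def _example_create_custom_trait(ctx):
    from placement import exception
    from placement.objects import trait as trait_obj

    trait = trait_obj.Trait(ctx, name='CUSTOM_GOLD')
    try:
        trait.create()
    except exception.TraitExists:
        # Another process created it first; looking it up is equivalent.
        trait = trait_obj.Trait.get_by_name(ctx, 'CUSTOM_GOLD')
    # get_all() accepts the same filters handled by _get_all_filtered_from_db(),
    # e.g. 'prefix' to list only custom traits.
    return trait, trait_obj.get_all(ctx, filters={'prefix': 'CUSTOM_'})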
import sqlalchemy as sa from sqlalchemy import distinct from sqlalchemy import func from sqlalchemy import sql from placement.db.sqlalchemy import models from placement import db_api from placement.objects import consumer_type as consumer_type_obj class Usage(object): def __init__(self, resource_class=None, usage=0, consumer_type=None, consumer_count=0): self.resource_class = resource_class self.usage = int(usage) self.consumer_type = (consumer_type or consumer_type_obj.NULL_CONSUMER_TYPE_ALIAS) self.consumer_count = int(consumer_count) def get_all_by_resource_provider_uuid(context, rp_uuid): """Get a list of Usage objects filtered by one resource provider.""" usage_list = _get_all_by_resource_provider_uuid(context, rp_uuid) return [Usage(**db_item) for db_item in usage_list] def get_by_consumer_type(context, project_id, user_id=None, consumer_type=None): """Get a list of Usage objects by consumer type.""" usage_list = _get_by_consumer_type(context, project_id, user_id=user_id, consumer_type=consumer_type) return [Usage(**db_item) for db_item in usage_list] def get_all_by_project_user(context, project_id, user_id=None): """Get a list of Usage objects filtered by project and (optional) user.""" usage_list = _get_all_by_project_user(context, project_id, user_id=user_id) return [Usage(**db_item) for db_item in usage_list] @db_api.placement_context_manager.reader def _get_all_by_resource_provider_uuid(context, rp_uuid): query = (context.session.query(models.Inventory.resource_class_id, func.coalesce(func.sum(models.Allocation.used), 0)) .join(models.ResourceProvider, models.Inventory.resource_provider_id == models.ResourceProvider.id) .outerjoin(models.Allocation, sql.and_(models.Inventory.resource_provider_id == models.Allocation.resource_provider_id, models.Inventory.resource_class_id == models.Allocation.resource_class_id)) .filter(models.ResourceProvider.uuid == rp_uuid) .group_by(models.Inventory.resource_class_id)) result = [dict(resource_class=context.rc_cache.string_from_id(item[0]), usage=item[1]) for item in query.all()] return result @db_api.placement_context_manager.reader def _get_all_by_project_user(context, project_id, user_id=None, consumer_type=None): """Get usages by project, user, and consumer type. When consumer_type is *not* "all" or "unknown", usages will be returned without regard to consumer type (behavior prior to microversion 1.38). :param context: `placement.context.RequestContext` that contains an oslo_db Session :param project_id: The project ID for which to get usages :param user_id: The optional user ID for which to get usages :param consumer_type: Optionally filter usages by consumer type, "all" or "unknown". If "all" is specified, all results will be grouped under one key, "all". If "unknown" is specified, all results will be grouped under one key, "unknown". """ query = (context.session.query(models.Allocation.resource_class_id, func.coalesce(func.sum(models.Allocation.used), 0)) .join(models.Consumer, models.Allocation.consumer_id == models.Consumer.uuid) .join(models.Project, models.Consumer.project_id == models.Project.id) .filter(models.Project.external_id == project_id)) if user_id: query = query.join(models.User, models.Consumer.user_id == models.User.id) query = query.filter(models.User.external_id == user_id) query = query.group_by(models.Allocation.resource_class_id) if consumer_type in ('all', 'unknown'): # NOTE(melwitt): We have to count the number of consumers in a separate # query in order to get a count of unique consumers. 
If we count in the # same query after grouping by resource class, we will count duplicate # consumers for any unique consumer that consumes more than one # resource class simultaneously (example: an instance consuming both # VCPU and MEMORY_MB). count_query = (context.session.query( func.count(distinct(models.Allocation.consumer_id))) .join(models.Consumer, models.Allocation.consumer_id == models.Consumer.uuid) .join(models.Project, models.Consumer.project_id == models.Project.id) .filter(models.Project.external_id == project_id)) if user_id: count_query = count_query.join( models.User, models.Consumer.user_id == models.User.id) count_query = count_query.filter( models.User.external_id == user_id) if consumer_type == 'unknown': count_query = count_query.filter( models.Consumer.consumer_type_id == sa.null()) number_of_unique_consumers = count_query.scalar() # Filter for unknown consumer type if specified. if consumer_type == 'unknown': query = query.filter(models.Consumer.consumer_type_id == sa.null()) result = [dict(resource_class=context.rc_cache.string_from_id(item[0]), usage=item[1], consumer_type=consumer_type, consumer_count=number_of_unique_consumers) for item in query.all()] else: result = [dict(resource_class=context.rc_cache.string_from_id(item[0]), usage=item[1]) for item in query.all()] return result @db_api.placement_context_manager.reader def _get_by_consumer_type(context, project_id, user_id=None, consumer_type=None): if consumer_type in ('all', 'unknown'): return _get_all_by_project_user(context, project_id, user_id, consumer_type=consumer_type) query = (context.session.query( models.Allocation.resource_class_id, func.coalesce(func.sum(models.Allocation.used), 0), func.count(distinct(models.Allocation.consumer_id)), models.ConsumerType.name) .join(models.Consumer, models.Allocation.consumer_id == models.Consumer.uuid) .outerjoin(models.ConsumerType, models.Consumer.consumer_type_id == models.ConsumerType.id) .join(models.Project, models.Consumer.project_id == models.Project.id) .filter(models.Project.external_id == project_id)) if user_id: query = query.join(models.User, models.Consumer.user_id == models.User.id) query = query.filter(models.User.external_id == user_id) if consumer_type: query = query.filter(models.ConsumerType.name == consumer_type) # NOTE(melwitt): We have to count grouped by only consumer type first in # order to get a count of unique consumers for a given consumer type. If we # only count after grouping by resource class, we will count duplicate # consumers for any unique consumer that consumes more than one resource # class simultaneously (example: an instance consuming both VCPU and # MEMORY_MB). unique_consumer_counts = {item[3]: item[2] for item in query.group_by(models.ConsumerType.name).all()} query = query.group_by(models.Allocation.resource_class_id, models.Consumer.consumer_type_id) result = [dict(resource_class=context.rc_cache.string_from_id(item[0]), usage=item[1], consumer_count=unique_consumer_counts[item[3]], consumer_type=item[3]) for item in query.all()] return result ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/objects/user.py0000664000175000017500000000577300000000000022602 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
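# NOTE: An illustrative sketch (not part of this file) of the usage helpers
# defined above; ``ctx`` is assumed to be a placement RequestContext, the
# project id is a placeholder, and 'INSTANCE' is an example consumer type
# name rather than a placement default.
def _example_project_usages(ctx, project_id):
    from placement.objects import usage as usage_obj

    # Per-resource-class totals for the project, regardless of consumer type.
    totals = usage_obj.get_all_by_project_user(ctx, project_id)
    # Totals grouped by consumer type, including unique consumer counts
    # (microversion 1.38 behaviour).
    by_type = usage_obj.get_by_consumer_type(
        ctx, project_id, consumer_type='INSTANCE')
    return ({u.resource_class: u.usage for u in totals},
            [(u.consumer_type, u.consumer_count) for u in by_type])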
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_db import exception as db_exc import sqlalchemy as sa from placement.db.sqlalchemy import models from placement import db_api from placement import exception USER_TBL = models.User.__table__ @db_api.placement_context_manager.writer def ensure_incomplete_user(ctx): """Ensures that a user record is created for the "incomplete consumer user". Returns the internal ID of that record. """ incomplete_id = ctx.config.placement.incomplete_consumer_user_id sel = sa.select( USER_TBL.c.id, ).where( USER_TBL.c.external_id == incomplete_id ) res = ctx.session.execute(sel).fetchone() if res: return res[0] ins = USER_TBL.insert().values(external_id=incomplete_id) res = ctx.session.execute(ins) return res.inserted_primary_key[0] @db_api.placement_context_manager.reader def _get_user_by_external_id(ctx, external_id): users = sa.alias(USER_TBL, name="u") sel = sa.select( users.c.id, users.c.external_id, users.c.updated_at, users.c.created_at, ) sel = sel.where(users.c.external_id == external_id) res = ctx.session.execute(sel).fetchone() if not res: raise exception.UserNotFound(external_id=external_id) return dict(res._mapping) class User(object): def __init__(self, context, id=None, external_id=None, updated_at=None, created_at=None): self._context = context self.id = id self.external_id = external_id self.updated_at = updated_at self.created_at = created_at @staticmethod def _from_db_object(ctx, target, source): target._context = ctx target.id = source['id'] target.external_id = source['external_id'] target.updated_at = source['updated_at'] target.created_at = source['created_at'] return target @classmethod def get_by_external_id(cls, ctx, external_id): res = _get_user_by_external_id(ctx, external_id) return cls._from_db_object(ctx, cls(ctx), res) def create(self): @db_api.placement_context_manager.writer def _create_in_db(ctx): db_obj = models.User(external_id=self.external_id) try: db_obj.save(ctx.session) except db_exc.DBDuplicateEntry: raise exception.UserExists(external_id=self.external_id) self._from_db_object(ctx, self, db_obj) _create_in_db(self._context) ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2527778 openstack_placement-13.0.0/placement/policies/0000775000175000017500000000000000000000000021414 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/policies/__init__.py0000664000175000017500000000261500000000000023531 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
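# NOTE: An illustrative "get or create" sketch for the User helpers defined
# above (not part of this file); ``ctx`` is assumed to be a placement
# RequestContext and ``external_id`` an identity-service user id supplied by
# the caller.
def _example_ensure_user(ctx, external_id):
    from placement import exception
    from placement.objects import user as user_obj

    try:
        return user_obj.User.get_by_external_id(ctx, external_id)
    except exception.UserNotFound:
        user = user_obj.User(ctx, external_id=external_id)
        try:
            user.create()
        except exception.UserExists:
            # Lost a race with another request; re-read the winner's row.
            user = user_obj.User.get_by_external_id(ctx, external_id)
        return user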
import itertools from placement.policies import aggregate from placement.policies import allocation from placement.policies import allocation_candidate from placement.policies import base from placement.policies import inventory from placement.policies import reshaper from placement.policies import resource_class from placement.policies import resource_provider from placement.policies import trait from placement.policies import usage def list_rules(): rules = itertools.chain( base.list_rules(), resource_provider.list_rules(), resource_class.list_rules(), inventory.list_rules(), aggregate.list_rules(), usage.list_rules(), trait.list_rules(), allocation.list_rules(), allocation_candidate.list_rules(), reshaper.list_rules(), ) return list(rules) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/policies/aggregate.py0000664000175000017500000000261000000000000023713 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from placement.policies import base PREFIX = 'placement:resource_providers:aggregates:%s' LIST = PREFIX % 'list' UPDATE = PREFIX % 'update' BASE_PATH = '/resource_providers/{uuid}/aggregates' rules = [ policy.DocumentedRuleDefault( LIST, base.ADMIN_OR_SERVICE, "List resource provider aggregates.", [ { 'method': 'GET', 'path': BASE_PATH } ], scope_types=['project'], ), policy.DocumentedRuleDefault( UPDATE, base.ADMIN_OR_SERVICE, "Update resource provider aggregates.", [ { 'method': 'PUT', 'path': BASE_PATH } ], scope_types=['project'], ), ] def list_rules(): return rules ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/policies/allocation.py0000664000175000017500000000510500000000000024114 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
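# NOTE: The aggregated defaults returned by placement.policies.list_rules()
# are what placement.policy.init() registers with its oslo.policy Enforcer.
# The snippet below is an illustrative inspection helper only, not part of
# this file.
def _example_dump_policy_defaults():
    from placement import policies

    for rule in policies.list_rules():
        # Each entry is an oslo.policy RuleDefault/DocumentedRuleDefault,
        # e.g. 'placement:resource_providers:aggregates:list' ->
        # 'rule:admin_or_service_api'.
        print(rule.name, rule.check_str)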
from oslo_policy import policy from placement.policies import base RP_ALLOC_LIST = 'placement:resource_providers:allocations:list' ALLOC_PREFIX = 'placement:allocations:%s' ALLOC_LIST = ALLOC_PREFIX % 'list' ALLOC_MANAGE = ALLOC_PREFIX % 'manage' ALLOC_UPDATE = ALLOC_PREFIX % 'update' ALLOC_DELETE = ALLOC_PREFIX % 'delete' rules = [ policy.DocumentedRuleDefault( name=ALLOC_MANAGE, check_str=base.ADMIN_OR_SERVICE, description="Manage allocations.", operations=[ { 'method': 'POST', 'path': '/allocations' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=ALLOC_LIST, check_str=base.ADMIN_OR_SERVICE, description="List allocations.", operations=[ { 'method': 'GET', 'path': '/allocations/{consumer_uuid}' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=ALLOC_UPDATE, check_str=base.ADMIN_OR_SERVICE, description="Update allocations.", operations=[ { 'method': 'PUT', 'path': '/allocations/{consumer_uuid}' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=ALLOC_DELETE, check_str=base.ADMIN_OR_SERVICE, description="Delete allocations.", operations=[ { 'method': 'DELETE', 'path': '/allocations/{consumer_uuid}' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=RP_ALLOC_LIST, check_str=base.ADMIN_OR_SERVICE, description="List resource provider allocations.", operations=[ { 'method': 'GET', 'path': '/resource_providers/{uuid}/allocations' } ], scope_types=['project'], ), ] def list_rules(): return rules ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/policies/allocation_candidate.py0000664000175000017500000000205200000000000026106 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from placement.policies import base LIST = 'placement:allocation_candidates:list' rules = [ policy.DocumentedRuleDefault( name=LIST, check_str=base.ADMIN_OR_SERVICE, description="List allocation candidates.", operations=[ { 'method': 'GET', 'path': '/allocation_candidates' } ], scope_types=['project'], ) ] def list_rules(): return rules ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/policies/base.py0000664000175000017500000000531700000000000022706 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
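# NOTE: Illustrative only. Any of the allocation rules above can be
# overridden by operators in the service's policy.yaml file (the default
# override file name set in placement/policy.py); the custom role name below
# is an example, not a placement default:
#
#     # policy.yaml
#     "placement:allocations:manage": "role:admin or role:capacity_manager"
#     "placement:allocations:update": "role:admin or role:capacity_manager"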
from oslo_log import versionutils from oslo_policy import policy RULE_ADMIN_API = 'rule:admin_api' _DEPRECATED_REASON = """ Placement API policies are introducing new default roles with scope_type capabilities. Old policies are deprecated and silently going to be ignored in the placement 6.0.0 (Xena) release. """ DEPRECATED_ADMIN_POLICY = policy.DeprecatedRule( name=RULE_ADMIN_API, check_str='role:admin', deprecated_reason=_DEPRECATED_REASON, deprecated_since=versionutils.deprecated.WALLABY ) # NOTE(lbragstad): We might consider converting these generic checks into # RuleDefaults or DocumentedRuleDefaults, but we need to thoroughly vet the # approach in oslo.policy and consume a new version. Until we have that done, # let's continue using generic check strings. ADMIN_OR_SERVICE = 'rule:admin_or_service_api' SERVICE = 'rule:service_api' ADMIN_OR_PROJECT_READER_OR_SERVICE = ( 'rule:admin_or_project_reader_or_service_api') rules = [ policy.RuleDefault( "admin_api", "role:admin", description="Default rule for most placement APIs.", scope_types=['project'], ), policy.RuleDefault( "service_api", "role:service", description="Default rule for service-to-service placement APIs.", scope_types=['project'], deprecated_rule=DEPRECATED_ADMIN_POLICY, ), policy.RuleDefault( "admin_or_service_api", "role:admin or role:service", description="Default rule for most placement APIs.", scope_types=['project'], deprecated_rule=DEPRECATED_ADMIN_POLICY, ), policy.RuleDefault( name="project_reader_api", check_str="role:reader and project_id:%(project_id)s", description="Default rule for Project level reader APIs.", deprecated_rule=DEPRECATED_ADMIN_POLICY ), policy.RuleDefault( "admin_or_project_reader_or_service_api", "role:admin or rule:project_reader_api or role:service", description="Default rule for project level reader APIs.", scope_types=['project'], deprecated_rule=DEPRECATED_ADMIN_POLICY, ), ] def list_rules(): return rules ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/policies/inventory.py0000664000175000017500000000536200000000000024031 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
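# NOTE: An illustrative sketch (not part of this file) showing how the
# layered defaults above compose: ``project_reader_api`` is defined in terms
# of ``role:reader`` scoped to the target project, so
# ``admin_or_project_reader_or_service_api`` admits admins, the service
# user, or readers of the owning project.
def _example_show_base_rules():
    from placement.policies import base

    for rule in base.list_rules():
        print('%s = %s' % (rule.name, rule.check_str))
    # Expected names: admin_api, service_api, admin_or_service_api,
    # project_reader_api, admin_or_project_reader_or_service_api.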
from oslo_policy import policy from placement.policies import base PREFIX = 'placement:resource_providers:inventories:%s' LIST = PREFIX % 'list' CREATE = PREFIX % 'create' SHOW = PREFIX % 'show' UPDATE = PREFIX % 'update' DELETE = PREFIX % 'delete' BASE_PATH = '/resource_providers/{uuid}/inventories' rules = [ policy.DocumentedRuleDefault( name=LIST, check_str=base.ADMIN_OR_SERVICE, description="List resource provider inventories.", operations=[ { 'method': 'GET', 'path': BASE_PATH } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=CREATE, check_str=base.ADMIN_OR_SERVICE, description="Create one resource provider inventory.", operations=[ { 'method': 'POST', 'path': BASE_PATH } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=SHOW, check_str=base.ADMIN_OR_SERVICE, description="Show resource provider inventory.", operations=[ { 'method': 'GET', 'path': BASE_PATH + '/{resource_class}' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=UPDATE, check_str=base.ADMIN_OR_SERVICE, description="Update resource provider inventory.", operations=[ { 'method': 'PUT', 'path': BASE_PATH }, { 'method': 'PUT', 'path': BASE_PATH + '/{resource_class}' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=DELETE, check_str=base.ADMIN_OR_SERVICE, description="Delete resource provider inventory.", operations=[ { 'method': 'DELETE', 'path': BASE_PATH }, { 'method': 'DELETE', 'path': BASE_PATH + '/{resource_class}' } ], scope_types=['project'], ), ] def list_rules(): return rules ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/policies/reshaper.py0000664000175000017500000000201200000000000023572 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from placement.policies import base PREFIX = 'placement:reshaper:%s' RESHAPE = PREFIX % 'reshape' rules = [ policy.DocumentedRuleDefault( RESHAPE, base.SERVICE, "Reshape Inventory and Allocations.", [ { 'method': 'POST', 'path': '/reshaper' } ], scope_types=['project'], ), ] def list_rules(): return rules ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/policies/resource_class.py0000664000175000017500000000465400000000000025013 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_policy import policy from placement.policies import base PREFIX = 'placement:resource_classes:%s' LIST = PREFIX % 'list' CREATE = PREFIX % 'create' SHOW = PREFIX % 'show' UPDATE = PREFIX % 'update' DELETE = PREFIX % 'delete' rules = [ policy.DocumentedRuleDefault( name=LIST, check_str=base.ADMIN_OR_SERVICE, description="List resource classes.", operations=[ { 'method': 'GET', 'path': '/resource_classes' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=CREATE, check_str=base.ADMIN_OR_SERVICE, description="Create resource class.", operations=[ { 'method': 'POST', 'path': '/resource_classes' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=SHOW, check_str=base.ADMIN_OR_SERVICE, description="Show resource class.", operations=[ { 'method': 'GET', 'path': '/resource_classes/{name}' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=UPDATE, check_str=base.ADMIN_OR_SERVICE, description="Update resource class.", operations=[ { 'method': 'PUT', 'path': '/resource_classes/{name}' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=DELETE, check_str=base.ADMIN_OR_SERVICE, description="Delete resource class.", operations=[ { 'method': 'DELETE', 'path': '/resource_classes/{name}' } ], scope_types=['project'], ), ] def list_rules(): return rules ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/policies/resource_provider.py0000664000175000017500000000470600000000000025536 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_policy import policy from placement.policies import base PREFIX = 'placement:resource_providers:%s' LIST = PREFIX % 'list' CREATE = PREFIX % 'create' SHOW = PREFIX % 'show' UPDATE = PREFIX % 'update' DELETE = PREFIX % 'delete' rules = [ policy.DocumentedRuleDefault( name=LIST, check_str=base.ADMIN_OR_SERVICE, description="List resource providers.", operations=[ { 'method': 'GET', 'path': '/resource_providers' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=CREATE, check_str=base.ADMIN_OR_SERVICE, description="Create resource provider.", operations=[ { 'method': 'POST', 'path': '/resource_providers' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=SHOW, check_str=base.ADMIN_OR_SERVICE, description="Show resource provider.", operations=[ { 'method': 'GET', 'path': '/resource_providers/{uuid}' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=UPDATE, check_str=base.ADMIN_OR_SERVICE, description="Update resource provider.", operations=[ { 'method': 'PUT', 'path': '/resource_providers/{uuid}' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=DELETE, check_str=base.ADMIN_OR_SERVICE, description="Delete resource provider.", operations=[ { 'method': 'DELETE', 'path': '/resource_providers/{uuid}' } ], scope_types=['project'], ), ] def list_rules(): return rules ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/policies/trait.py0000664000175000017500000000647100000000000023121 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_policy import policy from placement.policies import base RP_TRAIT_PREFIX = 'placement:resource_providers:traits:%s' RP_TRAIT_LIST = RP_TRAIT_PREFIX % 'list' RP_TRAIT_UPDATE = RP_TRAIT_PREFIX % 'update' RP_TRAIT_DELETE = RP_TRAIT_PREFIX % 'delete' TRAITS_PREFIX = 'placement:traits:%s' TRAITS_LIST = TRAITS_PREFIX % 'list' TRAITS_SHOW = TRAITS_PREFIX % 'show' TRAITS_UPDATE = TRAITS_PREFIX % 'update' TRAITS_DELETE = TRAITS_PREFIX % 'delete' rules = [ policy.DocumentedRuleDefault( name=TRAITS_LIST, check_str=base.ADMIN_OR_SERVICE, description="List traits.", operations=[ { 'method': 'GET', 'path': '/traits' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=TRAITS_SHOW, check_str=base.ADMIN_OR_SERVICE, description="Show trait.", operations=[ { 'method': 'GET', 'path': '/traits/{name}' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=TRAITS_UPDATE, check_str=base.ADMIN_OR_SERVICE, description="Update trait.", operations=[ { 'method': 'PUT', 'path': '/traits/{name}' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=TRAITS_DELETE, check_str=base.ADMIN_OR_SERVICE, description="Delete trait.", operations=[ { 'method': 'DELETE', 'path': '/traits/{name}' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=RP_TRAIT_LIST, check_str=base.ADMIN_OR_SERVICE, description="List resource provider traits.", operations=[ { 'method': 'GET', 'path': '/resource_providers/{uuid}/traits' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=RP_TRAIT_UPDATE, check_str=base.ADMIN_OR_SERVICE, description="Update resource provider traits.", operations=[ { 'method': 'PUT', 'path': '/resource_providers/{uuid}/traits' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=RP_TRAIT_DELETE, check_str=base.ADMIN_OR_SERVICE, description="Delete resource provider traits.", operations=[ { 'method': 'DELETE', 'path': '/resource_providers/{uuid}/traits' } ], scope_types=['project'], ), ] def list_rules(): return rules ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/policies/usage.py0000664000175000017500000000321100000000000023067 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from placement.policies import base PROVIDER_USAGES = 'placement:resource_providers:usages' TOTAL_USAGES = 'placement:usages' rules = [ policy.DocumentedRuleDefault( name=PROVIDER_USAGES, check_str=base.ADMIN_OR_SERVICE, description="List resource provider usages.", operations=[ { 'method': 'GET', 'path': '/resource_providers/{uuid}/usages' } ], scope_types=['project'], ), policy.DocumentedRuleDefault( name=TOTAL_USAGES, # NOTE(gmann): Admin in any project (legacy admin) can get usage of # other project. Project member or reader roles can see usage of # their project only. 
check_str=base.ADMIN_OR_PROJECT_READER_OR_SERVICE, description="List total resource usages for a given project.", operations=[ { 'method': 'GET', 'path': '/usages' } ], scope_types=['project'], ), ] def list_rules(): return rules ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/policy.py0000664000175000017500000001115000000000000021454 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Policy Enforcement for placement API.""" import typing as ty from oslo_config import cfg from oslo_log import log as logging from oslo_policy import opts as policy_opts from oslo_policy import policy from oslo_utils import excutils from placement import exception from placement import policies LOG = logging.getLogger(__name__) _ENFORCER = None def reset(): """Used to reset the global _ENFORCER between test runs.""" global _ENFORCER if _ENFORCER: _ENFORCER.clear() _ENFORCER = None def init( conf: cfg.ConfigOpts, suppress_deprecation_warnings: bool = False, rules: ty.List[policy.RuleDefault] = None, ): """Init an Enforcer class. Sets the _ENFORCER global. :param conf: A ConfigOpts object to load configuration from. :param suppress_deprecation_warnings: **Test only** Suppress policy deprecation warnings to avoid polluting logs. :param rules: **Test only** The default rules to initialise. """ global _ENFORCER if not _ENFORCER: _enforcer = policy.Enforcer(conf) # NOTE(gmann): Explicitly disable the warnings for policies changing # their default check_str. During the policy-defaults-refresh work, all # the policy defaults have been changed and warnings for each policy # started filling the logs limit for various tool. # Once we move to new defaults only world then we can enable these # warnings again. _enforcer.suppress_default_change_warnings = True _enforcer.suppress_deprecation_warnings = suppress_deprecation_warnings _enforcer.register_defaults(rules or policies.list_rules()) _enforcer.load_rules() _ENFORCER = _enforcer def get_enforcer(): # This method is used by oslopolicy CLI scripts in order to generate policy # files from overrides on disk and defaults in code. We can just pass an # empty list and let oslo do the config lifting for us. cfg.CONF([], project='placement') # TODO(gmann): Remove setting the default value of config policy_file # once oslo_policy change the default value to 'policy.yaml'. # https://github.com/openstack/oslo.policy/blob/a626ad12fe5a3abd49d70e3e5b95589d279ab578/oslo_policy/opts.py#L49 policy_opts.set_defaults(cfg.CONF, 'policy.yaml') return _get_enforcer(cfg.CONF) def _get_enforcer(conf): init(conf) return _ENFORCER def authorize(context, action, target, do_raise=True): """Verifies that the action is valid on the target in this context. :param context: instance of placement.context.RequestContext :param action: string representing the action to be checked this should be colon separated for clarity, i.e. 
``placement:resource_providers:list`` :param target: dictionary representing the object of the action; for object creation this should be a dictionary representing the owner of the object e.g. ``{'project_id': context.project_id}``. :param do_raise: if True (the default), raises PolicyNotAuthorized; if False, returns False :raises placement.exception.PolicyNotAuthorized: if verification fails and do_raise is True. :returns: non-False value (not necessarily "True") if authorized, and the exact value False if not authorized and do_raise is False. """ try: # NOTE(mriedem): The "action" kwarg is for the PolicyNotAuthorized exc. return _ENFORCER.authorize( action, target, context, do_raise=do_raise, exc=exception.PolicyNotAuthorized, action=action) except policy.PolicyNotRegistered: with excutils.save_and_reraise_exception(): LOG.exception('Policy not registered') except policy.InvalidScope: raise exception.PolicyNotAuthorized(action) except Exception: with excutils.save_and_reraise_exception(): credentials = context.to_policy_values() LOG.debug('Policy check for %(action)s failed with credentials ' '%(credentials)s', {'action': action, 'credentials': credentials}) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/requestlog.py0000664000175000017500000001063700000000000022360 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Simple middleware for request logging that also sets request id. We combine these two pieces of functionality in one middleware because we want to be sure that we have a DEBUG log at the very start of the request, with a a global request id, and an INFO log at the very end of the request. """ from oslo_context import context from oslo_log import log as logging from oslo_middleware import request_id import webob.dec from placement import microversion LOG = logging.getLogger(__name__) class RequestLog(request_id.RequestId): """WSGI Middleware to write a simple request log with a global request id. Borrowed from Paste Translogger and incorporating oslo_middleware.request_id.RequestId. This also guards against a missing "Accept" header. """ def __init__(self, application): self.application = application @webob.dec.wsgify def __call__(self, req): # This duplicates code from __call__ on RequestId, but because of the # way that method is structured, calling super is not workable. self.set_global_req_id(req) # We must instantiate a Request context, otherwise the LOG in the # next line will not produce the expected output where we would expect # to see request ids. Instead we get '[-]'. Presumably there be magic # here... ctx = context.RequestContext.from_environ(req.environ) req.environ[request_id.ENV_REQUEST_ID] = ctx.request_id LOG.debug('Starting request: %s "%s %s"', req.remote_addr, req.method, self._get_uri(req.environ)) # Set the accept header if it is not otherwise set or is '*/*'. This # ensures that error responses will be in JSON. 
accept = req.environ.get('HTTP_ACCEPT') if not accept or accept == '*/*': req.environ['HTTP_ACCEPT'] = 'application/json' if LOG.isEnabledFor(logging.INFO): response = req.get_response(self._log_app) else: response = req.get_response(self.application) return_headers = [request_id.HTTP_RESP_HEADER_REQUEST_ID] return_headers.extend(self.compat_headers) for header in return_headers: if header not in response.headers: response.headers.add(header, ctx.request_id) return response @staticmethod def _get_uri(environ): req_uri = (environ.get('SCRIPT_NAME', '') + environ.get('PATH_INFO', '')) if environ.get('QUERY_STRING'): req_uri += '?' + environ['QUERY_STRING'] return req_uri def _log_app(self, environ, start_response): req_uri = self._get_uri(environ) def replacement_start_response(status, headers, exc_info=None): """We need to gaze at the content-length, if set, to write log info. """ size = None for name, value in headers: if name.lower() == 'content-length': size = value self.write_log(environ, req_uri, status, size) return start_response(status, headers, exc_info) return self.application(environ, replacement_start_response) def write_log(self, environ, req_uri, status, size): """Write the log info out in a formatted form to ``LOG.info``. """ if size is None: size = '-' LOG.info('%(REMOTE_ADDR)s "%(REQUEST_METHOD)s %(REQUEST_URI)s" ' 'status: %(status)s len: %(bytes)s ' 'microversion: %(microversion)s', {'REMOTE_ADDR': environ.get('REMOTE_ADDR', '-'), 'REQUEST_METHOD': environ['REQUEST_METHOD'], 'REQUEST_URI': req_uri, 'status': status.split(None, 1)[0], 'bytes': size, 'microversion': environ.get( microversion.MICROVERSION_ENVIRON, '-')}) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/rest_api_version_history.rst0000664000175000017500000007140100000000000025476 0ustar00zuulzuul00000000000000REST API Version History ======================== This documents the changes made to the REST API with every microversion change. The description for each version should be a verbose one which has enough information to be suitable for use in user documentation. Newton ------ .. _1.0 (Maximum in Newton): 1.0 - Initial Version ~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Newton This is the initial version of the placement REST API that was released in Nova 14.0.0 (Newton). This contains the following routes: * ``/resource_providers`` * ``/resource_providers/allocations`` * ``/resource_providers/inventories`` * ``/resource_providers/usages`` * ``/allocations`` Ocata ----- 1.1 - Resource provider aggregates ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Ocata The 1.1 version adds support for associating aggregates with resource providers. The following new operations are added: ``GET /resource_providers/{uuid}/aggregates`` Return all aggregates associated with a resource provider ``PUT /resource_providers/{uuid}/aggregates`` Update the aggregates associated with a resource provider 1.2 - Add custom resource classes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Ocata Placement API version 1.2 adds basic operations allowing an admin to create, list and delete custom resource classes. 
The following new routes are added: ``GET /resource_classes`` Return all resource classes ``POST /resource_classes`` Create a new custom resource class ``PUT /resource_classes/{name}`` Update the name of a custom resource class ``DELETE /resource_classes/{name}`` Delete a custom resource class ``GET /resource_classes/{name}`` Get a single resource class Custom resource classes must begin with the prefix ``CUSTOM_`` and contain only the letters A through Z, the numbers 0 through 9 and the underscore ``_`` character. 1.3 - 'member_of' query parameter ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Ocata Version 1.3 adds support for listing resource providers that are members of any of the list of aggregates provided using a ``member_of`` query parameter:: ?member_of=in:{agg1_uuid},{agg2_uuid},{agg3_uuid} 1.4 - Filter resource providers by requested resource capacity ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Ocata The 1.4 version adds support for querying resource providers that have the ability to serve a requested set of resources. A new "resources" query string parameter is now accepted to the ``GET /resource_providers`` API call. This parameter indicates the requested amounts of various resources that a provider must have the capacity to serve. The "resources" query string parameter takes the form:: ?resources=$RESOURCE_CLASS_NAME:$AMOUNT,$RESOURCE_CLASS_NAME:$AMOUNT For instance, if the user wishes to see resource providers that can service a request for 2 vCPUs, 1024 MB of RAM and 50 GB of disk space, the user can issue a request to:: GET /resource_providers?resources=VCPU:2,MEMORY_MB:1024,DISK_GB:50 If the resource class does not exist, then it will return a HTTP 400. .. note:: The resources filtering is also based on the `min_unit`, `max_unit` and `step_size` of the inventory record. For example, if the `max_unit` is 512 for the DISK_GB inventory for a particular resource provider and a GET request is made for `DISK_GB:1024`, that resource provider will not be returned. The `min_unit` is the minimum amount of resource that can be requested for a given inventory and resource provider. The `step_size` is the increment of resource that can be requested for a given resource on a given provider. Pike ---- 1.5 - 'DELETE' all inventory for a resource provider ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Pike Placement API version 1.5 adds DELETE method for deleting all inventory for a resource provider. The following new method is supported: ``DELETE /resource_providers/{uuid}/inventories`` Delete all inventories for a given resource provider 1.6 - Traits API ~~~~~~~~~~~~~~~~ .. versionadded:: Pike The 1.6 version adds basic operations allowing an admin to create, list, and delete custom traits, also adds basic operations allowing an admin to attach traits to a resource provider. The following new routes are added: ``GET /traits`` Return all resource classes. ``PUT /traits/{name}`` Insert a single custom trait. ``GET /traits/{name}`` Check if a trait name exists. ``DELETE /traits/{name}`` Delete the specified trait. ``GET /resource_providers/{uuid}/traits`` Return all traits associated with a specific resource provider. ``PUT /resource_providers/{uuid}/traits`` Update all traits for a specific resource provider. 
``DELETE /resource_providers/{uuid}/traits`` Remove any existing trait associations for a specific resource provider Custom traits must begin with the prefix ``CUSTOM_`` and contain only the letters A through Z, the numbers 0 through 9 and the underscore ``_`` character. 1.7 - Idempotent 'PUT /resource_classes/{name}' ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Pike The 1.7 version changes handling of ``PUT /resource_classes/{name}`` to be a create or verification of the resource class with ``{name}``. If the resource class is a custom resource class and does not already exist it will be created and a ``201`` response code returned. If the class already exists the response code will be ``204``. This makes it possible to check or create a resource class in one request. 1.8 - Require placement 'project_id', 'user_id' in 'PUT /allocations' ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Pike The 1.8 version adds ``project_id`` and ``user_id`` required request parameters to ``PUT /allocations``. 1.9 - Add 'GET /usages' ~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Pike The 1.9 version adds usages that can be queried by a project or project/user. The following new routes are added: ``GET /usages?project_id=`` Return all usages for a given project. ``GET /usages?project_id=&user_id=`` Return all usages for a given project and user. 1.10 - Allocation candidates ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Pike The 1.10 version brings a new REST resource endpoint for getting a list of allocation candidates. Allocation candidates are collections of possible allocations against resource providers that can satisfy a particular request for resources. Queens ------ 1.11 - Add 'allocations' link to the 'GET /resource_providers' response ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Queens The ``/resource_providers/{rp_uuid}/allocations`` endpoint has been available since version 1.0, but was not listed in the ``links`` section of the ``GET /resource_providers`` response. The link is included as of version 1.11. 1.12 - 'PUT' dict format to '/allocations/{consumer_uuid}' ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Queens In version 1.12 the request body of a ``PUT /allocations/{consumer_uuid}`` is expected to have an ``object`` for the ``allocations`` property, not as ``array`` as with earlier microversions. This puts the request body more in alignment with the structure of the ``GET /allocations/{consumer_uuid}`` response body. Because the ``PUT`` request requires ``user_id`` and ``project_id`` in the request body, these fields are added to the ``GET`` response. In addition, the response body for ``GET /allocation_candidates`` is updated so the allocations in the ``allocation_requests`` object work with the new ``PUT`` format. 1.13 - 'POST' multiple allocations to '/allocations' ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Queens Version 1.13 gives the ability to set or clear allocations for more than one consumer UUID with a request to ``POST /allocations``. 1.14 - Add nested resource providers ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Queens The 1.14 version introduces the concept of nested resource providers. The resource provider resource now contains two new attributes: * ``parent_provider_uuid`` indicates the provider's direct parent, or null if there is no parent. 
This attribute can be set in the call to ``POST /resource_providers`` and ``PUT /resource_providers/{uuid}`` if the attribute has not already been set to a non-NULL value (i.e. we do not support "reparenting" a provider) * ``root_provider_uuid`` indicates the UUID of the root resource provider in the provider's tree. This is a read-only attribute A new ``in_tree=`` parameter is now available in the ``GET /resource-providers`` API call. Supplying a UUID value for the ``in_tree`` parameter will cause all resource providers within the "provider tree" of the provider matching ```` to be returned. 1.15 - Add 'last-modified' and 'cache-control' headers ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Queens Throughout the API, 'last-modified' headers have been added to GET responses and those PUT and POST responses that have bodies. The value is either the actual last modified time of the most recently modified associated database entity or the current time if there is no direct mapping to the database. In addition, 'cache-control: no-cache' headers are added where the 'last-modified' header has been added to prevent inadvertent caching of resources. 1.16 - Limit allocation candidates ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Queens Add support for a ``limit`` query parameter when making a ``GET /allocation_candidates`` request. The parameter accepts an integer value, ``N``, which limits the maximum number of candidates returned. 1.17 - Add 'required' parameter to the allocation candidates ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Queens Add the ``required`` parameter to the ``GET /allocation_candidates`` API. It accepts a list of traits separated by ``,``. The provider summary in the response will include the attached traits also. Rocky ----- 1.18 - Support '?required=' queryparam on 'GET /resource_providers' ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Rocky Add support for the ``required`` query parameter to the ``GET /resource_providers`` API. It accepts a comma-separated list of string trait names. When specified, the API results will be filtered to include only resource providers marked with all the specified traits. This is in addition to (logical AND) any filtering based on other query parameters. Trait names which are empty, do not exist, or are otherwise invalid will result in a 400 error. 1.19 - Include generation and conflict detection in provider aggregates APIs ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Rocky Enhance the payloads for the ``GET /resource_providers/{uuid}/aggregates`` response and the ``PUT /resource_providers/{uuid}/aggregates`` request and response to be identical, and to include the ``resource_provider_generation``. As with other generation-aware APIs, if the ``resource_provider_generation`` specified in the ``PUT`` request does not match the generation known by the server, a 409 Conflict error is returned. 1.20 - Return 200 with provider payload from 'POST /resource_providers' ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Rocky The ``POST /resource_providers`` API, on success, returns 200 with a payload representing the newly-created resource provider, in the same format as the corresponding ``GET /resource_providers/{uuid}`` call. This is to allow the caller to glean automatically-set fields, such as UUID and generation, without a subsequent GET. 
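For illustration only (the provider name below is a placeholder), creating a provider at this microversion and reading the echoed representation looks like::

    POST /resource_providers
    OpenStack-API-Version: placement 1.20
    Content-Type: application/json

    {"name": "compute-node-01"}

The 200 response body carries the same fields as a ``GET /resource_providers/{uuid}`` call, such as ``uuid``, ``name``, ``generation`` (0 for a newly-created provider), ``parent_provider_uuid``, ``root_provider_uuid`` and ``links``.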
1.21 - Support '?member_of=' queryparam on 'GET /allocation_candidates' ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Rocky Add support for the ``member_of`` query parameter to the ``GET /allocation_candidates`` API. It accepts a comma-separated list of UUIDs for aggregates. Note that if more than one aggregate UUID is passed, the comma-separated list must be prefixed with the "in:" operator. If this parameter is provided, the only resource providers returned will be those in one of the specified aggregates that meet the other parts of the request. 1.22 - Support forbidden traits on resource providers and allocations candidates ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Rocky Add support for expressing traits which are forbidden when filtering ``GET /resource_providers`` or ``GET /allocation_candidates``. A forbidden trait is a properly formatted trait in the existing ``required`` parameter, prefixed by a ``!``. For example ``required=!STORAGE_DISK_SSD`` asks that the results not include any resource providers that provide solid state disk. 1.23 - Include 'code' attribute in JSON error responses ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Rocky JSON formatted error responses gain a new attribute, ``code``, with a value that identifies the type of this error. This can be used to distinguish errors that are different but use the same HTTP status code. Any error response which does not specifically define a code will have the code ``placement.undefined_code``. 1.24 - Support multiple '?member_of' queryparams ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Rocky Add support for specifying multiple ``member_of`` query parameters to the ``GET /resource_providers`` API. When multiple ``member_of`` query parameters are found, they are AND'd together in the final query. For example, issuing a request for ``GET /resource_providers?member_of=agg1&member_of=agg2`` means get the resource providers that are associated with BOTH agg1 and agg2. Issuing a request for ``GET /resource_providers?member_of=in:agg1,agg2&member_of=agg3`` means get the resource providers that are associated with agg3 and are also associated with *any of* (agg1, agg2). 1.25 - Granular resource requests to 'GET /allocation_candidates' ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: Rocky ``GET /allocation_candidates`` is enhanced to accept numbered groupings of resource, required/forbidden trait, and aggregate association requests. A ``resources`` query parameter key with a positive integer suffix (e.g. ``resources42``) will be logically associated with ``required`` and/or ``member_of`` query parameter keys with the same suffix (e.g. ``required42``, ``member_of42``). The resources, required/forbidden traits, and aggregate associations in that group will be satisfied by the same resource provider in the response. When more than one numbered grouping is supplied, the ``group_policy`` query parameter is required to indicate how the groups should interact. With ``group_policy=none``, separate groupings - numbered or unnumbered - may or may not be satisfied by the same provider. With ``group_policy=isolate``, numbered groups are guaranteed to be satisfied by *different* providers - though there may still be overlap with the unnumbered group. 
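For example (the resource classes, traits and amounts below are illustrative), two virtual functions that must be satisfied by different providers can be requested with two numbered groups and ``group_policy=isolate``::

    GET /allocation_candidates?resources=VCPU:2,MEMORY_MB:4096&resources1=SRIOV_NET_VF:1&required1=CUSTOM_PHYSNET_PUBLIC&resources2=SRIOV_NET_VF:1&required2=CUSTOM_PHYSNET_PRIVATE&group_policy=isolate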
In all cases, each ``allocation_request`` will be satisfied by providers in a
single non-sharing provider tree and/or sharing providers associated via
aggregate with any of the providers in that tree.

The ``required`` and ``member_of`` query parameters for a given group are
optional. That is, you may specify ``resources42=XXX`` without a corresponding
``required42=YYY`` or ``member_of42=ZZZ``. However, the reverse (specifying
``required42=YYY`` or ``member_of42=ZZZ`` without ``resources42=XXX``) will
result in an error.

The semantic of the (unnumbered) ``resources``, ``required``, and
``member_of`` query parameters is unchanged: the resources, traits, and
aggregate associations specified thereby may be satisfied by any provider in
the same non-sharing tree or associated via the specified aggregate(s).

1.26 - Allow inventories to have reserved value equal to total
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: Rocky

Starting with this version, it is allowed to set the reserved value of the
resource provider inventory to be equal to total.

1.27 - Include all resource class inventories in 'provider_summaries'
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: Rocky

Include all resource class inventories in the ``provider_summaries`` field in
response of the ``GET /allocation_candidates`` API even if the resource class
is not in the requested resources.

1.28 - Consumer generation support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: Rocky

A new generation field has been added to the consumer concept. Consumers are
the actors that are allocated resources in the placement API. When an
allocation is created, a consumer UUID is specified. Starting with
microversion 1.8, a project and user ID are also required. If using
microversions prior to 1.8, these are populated from the
``incomplete_consumer_project_id`` and ``incomplete_consumer_user_id`` config
options from the ``[placement]`` section.

The consumer generation facilitates safe concurrent modification of an
allocation.

A consumer generation is now returned from the following URIs:

``GET /resource_providers/{uuid}/allocations``

  The response continues to be a dict with a key of ``allocations``, which
  itself is a dict, keyed by consumer UUID, of allocations against the
  resource provider. For each of those dicts, a ``consumer_generation`` field
  will now be shown.

``GET /allocations/{consumer_uuid}``

  The response continues to be a dict with a key of ``allocations``, which
  itself is a dict, keyed by resource provider UUID, of allocations being
  consumed by the consumer with the ``{consumer_uuid}``. The top-level dict
  will also now contain a ``consumer_generation`` field.

The value of the ``consumer_generation`` field is opaque and should only be
used to send back to subsequent operations on the consumer's allocations.

The ``PUT /allocations/{consumer_uuid}`` URI has been modified to now require
a ``consumer_generation`` field in the request payload. This field is required
to be ``null`` if the caller expects that there are no allocations already
existing for the consumer. Otherwise, it should contain the generation that
the caller understands the consumer to be at the time of the call.

A ``409 Conflict`` will be returned from ``PUT /allocations/{consumer_uuid}``
if there was a mismatch between the supplied generation and the consumer's
generation as known by the server.

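A hedged sketch of the read-and-echo cycle a client might use follows (not
code from this tree; the endpoint, token, and consumer UUID are hypothetical,
and the consumer is assumed to already have allocations)::

  import requests

  PLACEMENT = 'http://placement.example.com/placement'  # hypothetical endpoint
  HEADERS = {
      'X-Auth-Token': 'ADMIN_TOKEN',                     # hypothetical token
      'OpenStack-API-Version': 'placement 1.28',
  }
  consumer = 'a0b15655-273a-4b3d-9792-2e579b7d5ad9'      # hypothetical consumer

  # Read the current allocations along with the consumer generation.
  current = requests.get(
      '%s/allocations/%s' % (PLACEMENT, consumer), headers=HEADERS).json()

  payload = {
      'allocations': current['allocations'],  # adjusted as needed by the caller
      'project_id': current['project_id'],
      'user_id': current['user_id'],
      # Echo the generation back; use None when no allocations are expected
      # to exist yet for this consumer.
      'consumer_generation': current.get('consumer_generation'),
  }
  resp = requests.put(
      '%s/allocations/%s' % (PLACEMENT, consumer),
      json=payload, headers=HEADERS)
  if resp.status_code == 409:
      # Another writer changed the allocations; re-read and retry as needed.
      pass
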
Similarly, a ``409 Conflict`` will be returned if during the course of
replacing the consumer's allocations another process concurrently changed the
consumer's allocations. This allows the caller to react to the concurrent
write by re-reading the consumer's allocations and re-issuing the call to
replace allocations as needed.

The ``PUT /allocations/{consumer_uuid}`` URI has also been modified to accept
an empty allocations object, thereby bringing it to parity with the behaviour
of ``POST /allocations``, which uses an empty allocations object to indicate
that the allocations for a particular consumer should be removed. Passing an
empty allocations object along with a ``consumer_generation`` makes
``PUT /allocations/{consumer_uuid}`` a **safe** way to delete allocations for
a consumer. The ``DELETE /allocations/{consumer_uuid}`` URI remains unsafe to
call in deployments where multiple callers may simultaneously be attempting to
modify a consumer's allocations.

The ``POST /allocations`` URI variant has also been changed to require a
``consumer_generation`` field in the request payload **for each consumer
involved in the request**. Similar responses to
``PUT /allocations/{consumer_uuid}`` are returned when any of the consumers'
generations conflict with the server's view of those consumers or if any of
the consumers involved in the request are modified by another process.

.. warning:: In all cases, it is absolutely **NOT SAFE** to create and modify
   allocations for a consumer using different microversions where one of the
   microversions is prior to 1.28. The only way to safely modify allocations
   for a consumer and satisfy expectations you have regarding the prior
   existence (or lack of existence) of those allocations is to always use
   microversion 1.28+ when calling allocations API endpoints.

1.29 - Support allocation candidates with nested resource providers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: Rocky

Add support for nested resource providers with the following two features.

1) ``GET /allocation_candidates`` is aware of nested providers. Namely, when
   provider trees are present, ``allocation_requests`` in the response of
   ``GET /allocation_candidates`` can include allocations on combinations of
   multiple resource providers in the same tree.
2) ``root_provider_uuid`` and ``parent_provider_uuid`` are added to
   ``provider_summaries`` in the response of ``GET /allocation_candidates``.

1.30 - Provide a '/reshaper' resource
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: Rocky

Add support for a ``POST /reshaper`` resource that provides for atomically
migrating resource provider inventories and associated allocations when some
of the inventory moves from one resource provider to another, such as when a
class of inventory moves from a parent provider to a new child provider.

.. note:: This is a special operation that should only be used in rare cases
   of resource provider topology changing when inventory is in use. Only use
   this if you are really sure of what you are doing.

Stein
-----

.. The following fragment is referred from the stein prelude release note
   releasenotes/notes/stein-prelude-779b0dbfe65cf9ac.yaml

.. _add-in-tree-queryparam-on-get-allocation-candidates-maximum-in-stein:

1.31 - Add 'in_tree' queryparam on 'GET /allocation_candidates'
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: Stein

Add support for the ``in_tree`` query parameter to the
``GET /allocation_candidates`` API.
It accepts a UUID for a resource provider. If this parameter is provided, the
only resource providers returned will be those in the same tree with the given
resource provider.

The numbered syntax ``in_tree<N>`` is also supported. This restricts providers
satisfying the Nth granular request group to the tree of the specified
provider. This may be redundant with other ``in_tree<N>`` values specified in
other groups (including the unnumbered group). However, it can be useful in
cases where a specific resource (e.g. DISK_GB) needs to come from a specific
sharing provider (e.g. shared storage).

For example, a request for ``VCPU`` and ``VGPU`` resources from ``myhost`` and
``DISK_GB`` resources from ``sharing1`` might look like::

  ?resources=VCPU:1&in_tree=<myhost_uuid>
  &resources1=VGPU:1&in_tree1=<myhost_uuid>
  &resources2=DISK_GB:100&in_tree2=<sharing1_uuid>

Train
-----

1.32 - Support forbidden aggregates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: Train

Add support for forbidden aggregates in ``member_of`` queryparam in
``GET /resource_providers`` and ``GET /allocation_candidates``. Forbidden
aggregates are prefixed with a ``!``.

This negative expression can also be used in multiple ``member_of``
parameters::

  ?member_of=in:<agg1>,<agg2>&member_of=<agg3>&member_of=!<agg4>

would translate logically to "Candidate resource providers must be at least
one of agg1 or agg2, definitely in agg3 and definitely *not* in agg4."

We do NOT support ``!`` within the ``in:`` list::

  ?member_of=in:<agg1>,<agg2>,!<agg3>

but we support the ``!in:`` prefix::

  ?member_of=!in:<agg1>,<agg2>,<agg3>

which is equivalent to::

  ?member_of=!<agg1>&member_of=!<agg2>&member_of=!<agg3>

where candidate resource providers must not be in agg1, agg2, or agg3.

1.33 - Support string request group suffixes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: Train

The syntax for granular groupings of resource, required/forbidden trait, and
aggregate association requests introduced in ``1.25`` has been extended to
allow, in addition to numbers, strings from 1 to 64 characters in length
consisting of a-z, A-Z, 0-9, ``_``, and ``-``. This is done to allow naming
conventions (e.g., ``resources_COMPUTE`` and ``resources_NETWORK``) to emerge
in situations where multiple services are collaborating to make requests.

For example, in addition to the already supported::

  resources42=XXX&required42=YYY&member_of42=ZZZ

it is now possible to use more complex strings, including UUIDs::

  resources_PORT_fccc7adb-095e-4bfd-8c9b-942f41990664=XXX
  &required_PORT_fccc7adb-095e-4bfd-8c9b-942f41990664=YYY
  &member_of_PORT_fccc7adb-095e-4bfd-8c9b-942f41990664=ZZZ

1.34 - Request group mappings in allocation candidates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: Train

The body of the response to a ``GET /allocation_candidates`` request has been
extended to include a ``mappings`` field with each allocation request. The
value is a dictionary associating request group suffixes with the uuids of
those resource providers that satisfy the identified request group. For
convenience, this mapping can be included in the request payload for
``POST /allocations``, ``PUT /allocations/{consumer_uuid}``, and
``POST /reshaper``, but it will be ignored.

1.35 - Support 'root_required' queryparam on GET /allocation_candidates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: Train

Add support for the ``root_required`` query parameter to the
``GET /allocation_candidates`` API.
It accepts a comma-delimited list of trait names, each optionally prefixed
with ``!`` to indicate a forbidden trait, in the same format as the
``required`` query parameter. This restricts allocation requests in the
response to only those whose (non-sharing) tree's root resource provider
satisfies the specified trait requirements. See
:ref:`filtering by root provider traits` for details.

1.36 - Support 'same_subtree' queryparam on GET /allocation_candidates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: Train

Add support for the ``same_subtree`` query parameter to the
``GET /allocation_candidates`` API. It accepts a comma-separated list of
request group suffix strings $S. Each must exactly match a suffix on a
granular group somewhere else in the request. Importantly, the identified
request groups need not have a resources$S. If this is provided, at least one
of the resource providers satisfying a specified request group must be an
ancestor of the rest. The ``same_subtree`` query parameter can be repeated and
each repeat group is treated independently.

Xena
----

1.37 - Allow re-parenting and un-parenting via PUT /resource_providers/{uuid}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: Xena

Add support for re-parenting and un-parenting a resource provider via the
``PUT /resource_providers/{uuid}`` API by allowing changing the
``parent_provider_uuid`` to any existing provider, except providers in the
same subtree. Un-parenting can be achieved by setting the
``parent_provider_uuid`` to ``null``. This means that the provider becomes a
new root provider.

1.38 - Support consumer_type in allocations, usage and reshaper
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: Xena

Adds support for a ``consumer_type`` (required) key in the request body of
``POST /allocations``, ``PUT /allocations/{consumer_uuid}`` and in the
response of ``GET /allocations/{consumer_uuid}``.

``GET /usages`` requests gain a ``consumer_type`` key as an optional query
parameter to filter usages based on consumer_types. The ``GET /usages``
response will group results based on the consumer type and will include a new
``consumer_count`` key per type irrespective of whether the ``consumer_type``
was specified in the request. If an ``all`` ``consumer_type`` key is provided,
all results are grouped under one key, ``all``. Older allocations which were
not created with a consumer type are considered to have an ``unknown``
``consumer_type``. If an ``unknown`` ``consumer_type`` key is provided, all
results are grouped under one key, ``unknown``.

The corresponding changes to ``POST /reshaper`` are included.

Yoga
----

1.39 - Support for the any-traits syntax in the ``required`` parameter
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: Yoga

Adds support for the ``in:`` syntax in the ``required`` query parameter in the
``GET /resource_providers`` API as well as to the ``required`` and
``requiredN`` query params of the ``GET /allocation_candidates`` API. Also
adds support for repeating the ``required`` and ``requiredN`` parameters in
the respective APIs. So::

  required=in:T3,T4&required=T1,!T2

is supported and it means T1 and not T2 and (T3 or T4).

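To illustrate the combined syntax from a client, a minimal sketch follows
(not code from this tree; the endpoint URL and token are hypothetical, and the
trait names are standard os-traits used purely as examples). Repeating the
``required`` parameter is expressed here as a list of key/value tuples::

  import requests

  PLACEMENT = 'http://placement.example.com/placement'  # hypothetical endpoint
  HEADERS = {
      'X-Auth-Token': 'ADMIN_TOKEN',                     # hypothetical token
      'OpenStack-API-Version': 'placement 1.39',
  }
  # "any of" AVX2 or SSE42, plus one required and one forbidden trait.
  params = [
      ('resources', 'VCPU:4'),
      ('required', 'in:HW_CPU_X86_AVX2,HW_CPU_X86_SSE42'),
      ('required', 'COMPUTE_VOLUME_MULTI_ATTACH,!COMPUTE_STATUS_DISABLED'),
  ]
  resp = requests.get(PLACEMENT + '/allocation_candidates',
                      params=params, headers=HEADERS)
  resp.raise_for_status()
  print(resp.json()['provider_summaries'])
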
././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2567778 openstack_placement-13.0.0/placement/schemas/0000775000175000017500000000000000000000000021230 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/schemas/__init__.py0000664000175000017500000000000000000000000023327 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/schemas/aggregate.py0000664000175000017500000000225700000000000023536 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Aggregate schemas for Placement API.""" import copy _AGGREGATES_LIST_SCHEMA = { "type": "array", "items": { "type": "string", "format": "uuid" }, "uniqueItems": True } PUT_AGGREGATES_SCHEMA_V1_1 = copy.deepcopy(_AGGREGATES_LIST_SCHEMA) PUT_AGGREGATES_SCHEMA_V1_19 = { "type": "object", "properties": { "aggregates": copy.deepcopy(_AGGREGATES_LIST_SCHEMA), "resource_provider_generation": { "type": "integer", } }, "required": [ "aggregates", "resource_provider_generation", ], "additionalProperties": False, } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/schemas/allocation.py0000664000175000017500000001670300000000000023736 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Placement API schemas for setting and deleting allocations.""" import copy from placement.schemas import common ALLOCATION_SCHEMA = { "type": "object", "properties": { "allocations": { "type": "array", "minItems": 1, "items": { "type": "object", "properties": { "resource_provider": { "type": "object", "properties": { "uuid": { "type": "string", "format": "uuid" } }, "additionalProperties": False, "required": ["uuid"] }, "resources": { "type": "object", "minProperties": 1, "patternProperties": { common.RC_PATTERN: { "type": "integer", "minimum": 1, } }, "additionalProperties": False } }, "required": [ "resource_provider", "resources" ], "additionalProperties": False } } }, "required": ["allocations"], "additionalProperties": False } ALLOCATION_SCHEMA_V1_8 = copy.deepcopy(ALLOCATION_SCHEMA) ALLOCATION_SCHEMA_V1_8['properties']['project_id'] = {'type': 'string', 'minLength': 1, 'maxLength': 255} ALLOCATION_SCHEMA_V1_8['properties']['user_id'] = {'type': 'string', 'minLength': 1, 'maxLength': 255} ALLOCATION_SCHEMA_V1_8['required'].extend(['project_id', 'user_id']) # Update the allocation schema to achieve symmetry with the representation # used when GET /allocations/{consumer_uuid} is called. # NOTE(cdent): Explicit duplication here for sake of comprehensibility. ALLOCATION_SCHEMA_V1_12 = { "type": "object", "properties": { "allocations": { "type": "object", "minProperties": 1, # resource provider uuid "patternProperties": { common.UUID_PATTERN: { "type": "object", "properties": { # generation is optional "generation": { "type": "integer", }, "resources": { "type": "object", "minProperties": 1, # resource class "patternProperties": { common.RC_PATTERN: { "type": "integer", "minimum": 1, } }, "additionalProperties": False } }, "required": ["resources"], "additionalProperties": False } }, "additionalProperties": False }, "project_id": { "type": "string", "minLength": 1, "maxLength": 255 }, "user_id": { "type": "string", "minLength": 1, "maxLength": 255 } }, "additionalProperties": False, "required": [ "allocations", "project_id", "user_id" ] } # POST to /allocations, added in microversion 1.13, uses the # POST_ALLOCATIONS_V1_13 schema to allow multiple allocations # from multiple consumers in one request. It is a dict, keyed by # consumer uuid, using the form of PUT allocations from microversion # 1.12. In POST the allocations can be empty, so DELETABLE_ALLOCATIONS # modifies ALLOCATION_SCHEMA_V1_12 accordingly. DELETABLE_ALLOCATIONS = copy.deepcopy(ALLOCATION_SCHEMA_V1_12) DELETABLE_ALLOCATIONS['properties']['allocations']['minProperties'] = 0 POST_ALLOCATIONS_V1_13 = { "type": "object", "minProperties": 1, "additionalProperties": False, "patternProperties": { common.UUID_PATTERN: DELETABLE_ALLOCATIONS } } # A required consumer generation was added to the top-level dict in this # version of PUT /allocations/{consumer_uuid}. 
In addition, the PUT # /allocations/{consumer_uuid}/now allows for empty allocations (indicating the # allocations are being removed) ALLOCATION_SCHEMA_V1_28 = copy.deepcopy(DELETABLE_ALLOCATIONS) ALLOCATION_SCHEMA_V1_28['properties']['consumer_generation'] = { "type": ["integer", "null"], "additionalProperties": False } ALLOCATION_SCHEMA_V1_28['required'].append("consumer_generation") # A required consumer generation was added to the allocations dicts in this # version of POST /allocations REQUIRED_GENERATION_ALLOCS_POST = copy.deepcopy(DELETABLE_ALLOCATIONS) alloc_props = REQUIRED_GENERATION_ALLOCS_POST['properties'] alloc_props['consumer_generation'] = { "type": ["integer", "null"], "additionalProperties": False } REQUIRED_GENERATION_ALLOCS_POST['required'].append("consumer_generation") POST_ALLOCATIONS_V1_28 = copy.deepcopy(POST_ALLOCATIONS_V1_13) POST_ALLOCATIONS_V1_28["patternProperties"] = { common.UUID_PATTERN: REQUIRED_GENERATION_ALLOCS_POST } # Microversion 1.34 allows an optional mappings object which associates # request group suffixes with lists of resource provider uuids. mappings_schema = { "type": "object", "minProperties": 1, "patternProperties": { common.GROUP_PAT_1_33: { "type": "array", "minItems": 1, "items": { "type": "string", "format": "uuid" } } } } ALLOCATION_SCHEMA_V1_34 = copy.deepcopy(ALLOCATION_SCHEMA_V1_28) ALLOCATION_SCHEMA_V1_34['properties']['mappings'] = mappings_schema POST_ALLOCATIONS_V1_34 = copy.deepcopy(POST_ALLOCATIONS_V1_28) POST_ALLOCATIONS_V1_34["patternProperties"] = { common.UUID_PATTERN: ALLOCATION_SCHEMA_V1_34 } # A required consumer type was added to the allocations dicts in this # version of PUT /allocations/{consumer_uuid} and POST /allocations. ALLOCATION_SCHEMA_V1_38 = copy.deepcopy(ALLOCATION_SCHEMA_V1_34) ALLOCATION_SCHEMA_V1_38['properties']['consumer_type'] = { "type": "string", "pattern": common.CONSUMER_TYPE_PATTERN, "minLength": 1, "maxLength": 255, } ALLOCATION_SCHEMA_V1_38['required'].append("consumer_type") POST_ALLOCATIONS_V1_38 = copy.deepcopy(POST_ALLOCATIONS_V1_34) POST_ALLOCATIONS_V1_38["patternProperties"] = { common.UUID_PATTERN: ALLOCATION_SCHEMA_V1_38 } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/schemas/allocation_candidate.py0000664000175000017500000000624600000000000025733 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Placement API schemas for getting allocation candidates.""" import copy from placement.schemas import common # Represents the allowed query string parameters to the GET # /allocation_candidates API call GET_SCHEMA_1_10 = { "type": "object", "properties": { "resources": { "type": "string" }, }, "required": [ "resources", ], "additionalProperties": False, } # Add limit query parameter. GET_SCHEMA_1_16 = copy.deepcopy(GET_SCHEMA_1_10) GET_SCHEMA_1_16['properties']['limit'] = { # A query parameter is always a string in webOb, but # we'll handle integer here as well. 
"type": ["integer", "string"], "pattern": "^[1-9][0-9]*$", "minimum": 1, "minLength": 1 } # Add required parameter. GET_SCHEMA_1_17 = copy.deepcopy(GET_SCHEMA_1_16) GET_SCHEMA_1_17['properties']['required'] = { "type": ["string"] } # Add member_of parameter. GET_SCHEMA_1_21 = copy.deepcopy(GET_SCHEMA_1_17) GET_SCHEMA_1_21['properties']['member_of'] = { "type": ["string"] } GET_SCHEMA_1_25 = copy.deepcopy(GET_SCHEMA_1_21) # We're going to *replace* 'resources', 'required', and 'member_of'. del GET_SCHEMA_1_25["properties"]["resources"] del GET_SCHEMA_1_25["required"] del GET_SCHEMA_1_25["properties"]["required"] del GET_SCHEMA_1_25["properties"]["member_of"] # Pattern property key format for a numbered or un-numbered grouping _GROUP_PAT_FMT = "^%s(" + common.GROUP_PAT + ")?$" GET_SCHEMA_1_25["patternProperties"] = { _GROUP_PAT_FMT % "resources": { "type": "string", }, _GROUP_PAT_FMT % "required": { "type": "string", }, _GROUP_PAT_FMT % "member_of": { "type": "string", }, } GET_SCHEMA_1_25["properties"]["group_policy"] = { "type": "string", "enum": ["none", "isolate"], } # Add in_tree parameter. GET_SCHEMA_1_31 = copy.deepcopy(GET_SCHEMA_1_25) GET_SCHEMA_1_31["patternProperties"][_GROUP_PAT_FMT % "in_tree"] = { "type": "string"} # Microversion 1.33 allows more complex resource group suffixes. GET_SCHEMA_1_33 = copy.deepcopy(GET_SCHEMA_1_31) _GROUP_PAT_FMT_1_33 = "^%s(" + common.GROUP_PAT_1_33 + ")?$" GET_SCHEMA_1_33["patternProperties"] = { _GROUP_PAT_FMT_1_33 % group_type: {"type": "string"} for group_type in ('resources', 'required', 'member_of', 'in_tree')} # Microversion 1.35 supports root_required. GET_SCHEMA_1_35 = copy.deepcopy(GET_SCHEMA_1_33) GET_SCHEMA_1_35["properties"]['root_required'] = { "type": ["string"] } # Microversion 1.36 supports same_subtree. GET_SCHEMA_1_36 = copy.deepcopy(GET_SCHEMA_1_35) GET_SCHEMA_1_36["properties"]['same_subtree'] = { "type": ["string"] } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/schemas/common.py0000664000175000017500000000255200000000000023076 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. _UUID_CHAR = "[0-9a-fA-F-]" # TODO(efried): Use this stricter pattern, and replace string/uuid with it: # UUID_PATTERN = "^%s{8}-%s{4}-%s{4}-%s{4}-%s{12}$" % ((_UUID_CHAR,) * 5) UUID_PATTERN = "^%s{36}$" % _UUID_CHAR _RC_TRAIT_CHAR = "[A-Z0-9_]" _RC_TRAIT_PATTERN = "^%s+$" % _RC_TRAIT_CHAR RC_PATTERN = _RC_TRAIT_PATTERN _CUSTOM_RC_TRAIT_PATTERN = "^CUSTOM_%s+$" % _RC_TRAIT_CHAR CUSTOM_RC_PATTERN = _CUSTOM_RC_TRAIT_PATTERN CUSTOM_TRAIT_PATTERN = _CUSTOM_RC_TRAIT_PATTERN CONSUMER_TYPE_PATTERN = _RC_TRAIT_PATTERN CONSUMER_TYPE_GET_PATTERN = "%s|^all|^unknown$" % CONSUMER_TYPE_PATTERN # The suffix used with request groups. Prior to 1.33, the group were numbered. # With 1.33 they become alphanumeric, '_', and '-' with a length limit of 64. 
GROUP_PAT = r'[1-9][0-9]*' GROUP_PAT_1_33 = r'[a-zA-Z0-9_-]{1,64}' ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/schemas/inventory.py0000664000175000017500000000516500000000000023646 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Inventory schemas for Placement API.""" import copy from placement.db import constants as db_const from placement.schemas import common BASE_INVENTORY_SCHEMA = { "type": "object", "properties": { "resource_provider_generation": { "type": "integer" }, "total": { "type": "integer", "maximum": db_const.MAX_INT, "minimum": 1, }, "reserved": { "type": "integer", "maximum": db_const.MAX_INT, "minimum": 0, }, "min_unit": { "type": "integer", "maximum": db_const.MAX_INT, "minimum": 1 }, "max_unit": { "type": "integer", "maximum": db_const.MAX_INT, "minimum": 1 }, "step_size": { "type": "integer", "maximum": db_const.MAX_INT, "minimum": 1 }, "allocation_ratio": { "type": "number", "maximum": db_const.SQL_SP_FLOAT_MAX }, }, "required": [ "total", "resource_provider_generation" ], "additionalProperties": False } POST_INVENTORY_SCHEMA = copy.deepcopy(BASE_INVENTORY_SCHEMA) POST_INVENTORY_SCHEMA['properties']['resource_class'] = { "type": "string", "pattern": common.RC_PATTERN, } POST_INVENTORY_SCHEMA['required'].append('resource_class') POST_INVENTORY_SCHEMA['required'].remove('resource_provider_generation') PUT_INVENTORY_RECORD_SCHEMA = copy.deepcopy(BASE_INVENTORY_SCHEMA) PUT_INVENTORY_RECORD_SCHEMA['required'].remove('resource_provider_generation') PUT_INVENTORY_SCHEMA = { "type": "object", "properties": { "resource_provider_generation": { "type": "integer" }, "inventories": { "type": "object", "patternProperties": { common.RC_PATTERN: PUT_INVENTORY_RECORD_SCHEMA, } } }, "required": [ "resource_provider_generation", "inventories" ], "additionalProperties": False } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/schemas/reshaper.py0000664000175000017500000000416200000000000023416 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Reshaper schema for Placement API.""" import copy from placement.schemas import allocation from placement.schemas import common from placement.schemas import inventory ALLOCATIONS = copy.deepcopy(allocation.POST_ALLOCATIONS_V1_28) # In the reshaper we need to allow allocations to be an empty dict # because it may be the case that there simply are no allocations # (now) for any of the inventory being moved. ALLOCATIONS['minProperties'] = 0 POST_RESHAPER_SCHEMA = { "type": "object", "properties": { "inventories": { "type": "object", "patternProperties": { # resource provider uuid common.UUID_PATTERN: inventory.PUT_INVENTORY_SCHEMA, }, # We expect at least one inventories, otherwise there is no reason # to call the reshaper. "minProperties": 1, "additionalProperties": False, }, "allocations": ALLOCATIONS, }, "required": [ "inventories", "allocations", ], "additionalProperties": False, } POST_RESHAPER_SCHEMA_V1_34 = copy.deepcopy(POST_RESHAPER_SCHEMA) ALLOCATIONS_V1_34 = copy.deepcopy(allocation.POST_ALLOCATIONS_V1_34) ALLOCATIONS_V1_34['minProperties'] = 0 POST_RESHAPER_SCHEMA_V1_34['properties']['allocations'] = ALLOCATIONS_V1_34 POST_RESHAPER_SCHEMA_V1_38 = copy.deepcopy(POST_RESHAPER_SCHEMA_V1_34) ALLOCATIONS_V1_38 = copy.deepcopy(allocation.POST_ALLOCATIONS_V1_38) ALLOCATIONS_V1_38['minProperties'] = 0 POST_RESHAPER_SCHEMA_V1_38['properties']['allocations'] = ALLOCATIONS_V1_38 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/schemas/resource_class.py0000664000175000017500000000177300000000000024626 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Placement API schemas for resource classes.""" import copy from placement.schemas import common POST_RC_SCHEMA_V1_2 = { "type": "object", "properties": { "name": { "type": "string", "pattern": common.CUSTOM_RC_PATTERN, "maxLength": 255, }, }, "required": [ "name" ], "additionalProperties": False, } PUT_RC_SCHEMA_V1_2 = copy.deepcopy(POST_RC_SCHEMA_V1_2) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/schemas/resource_provider.py0000664000175000017500000000715300000000000025351 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Placement API schemas for resource providers.""" import copy POST_RESOURCE_PROVIDER_SCHEMA = { "type": "object", "properties": { "name": { "type": "string", "maxLength": 200 }, "uuid": { "type": "string", "format": "uuid" } }, "required": [ "name" ], "additionalProperties": False, } # Remove uuid to create the schema for PUTting a resource provider PUT_RESOURCE_PROVIDER_SCHEMA = copy.deepcopy(POST_RESOURCE_PROVIDER_SCHEMA) PUT_RESOURCE_PROVIDER_SCHEMA['properties'].pop('uuid') # Placement API microversion 1.14 adds an optional parent_provider_uuid field # to the POST and PUT request schemas POST_RP_SCHEMA_V1_14 = copy.deepcopy(POST_RESOURCE_PROVIDER_SCHEMA) POST_RP_SCHEMA_V1_14["properties"]["parent_provider_uuid"] = { "anyOf": [ { "type": "string", "format": "uuid", }, { "type": "null", } ] } PUT_RP_SCHEMA_V1_14 = copy.deepcopy(POST_RP_SCHEMA_V1_14) PUT_RP_SCHEMA_V1_14['properties'].pop('uuid') # Represents the allowed query string parameters to the GET /resource_providers # API call GET_RPS_SCHEMA_1_0 = { "type": "object", "properties": { "name": { "type": "string" }, "uuid": { "type": "string", "format": "uuid" } }, "additionalProperties": False, } # Placement API microversion 1.3 adds support for a member_of attribute GET_RPS_SCHEMA_1_3 = copy.deepcopy(GET_RPS_SCHEMA_1_0) GET_RPS_SCHEMA_1_3['properties']['member_of'] = { "type": "string" } # Placement API microversion 1.4 adds support for requesting resource providers # having some set of capacity for some resources. The query string is a # comma-delimited set of "$RESOURCE_CLASS_NAME:$AMOUNT" strings. The validation # of the string is left up to the helper code in the # normalize_resources_qs_param() function. GET_RPS_SCHEMA_1_4 = copy.deepcopy(GET_RPS_SCHEMA_1_3) GET_RPS_SCHEMA_1_4['properties']['resources'] = { "type": "string" } # Placement API microversion 1.14 adds support for requesting resource # providers within a tree of providers. The 'in_tree' query string parameter # should be the UUID of a resource provider. The result of the GET call will # include only those resource providers in the same "provider tree" as the # provider with the UUID represented by 'in_tree' GET_RPS_SCHEMA_1_14 = copy.deepcopy(GET_RPS_SCHEMA_1_4) GET_RPS_SCHEMA_1_14['properties']['in_tree'] = { "type": "string", "format": "uuid", } # Microversion 1.18 adds support for the `required` query parameter to the # `GET /resource_providers` API. It accepts a comma-separated list of string # trait names. When specified, the API results will be filtered to include only # resource providers marked with all the specified traits. This is in addition # to (logical AND) any filtering based on other query parameters. GET_RPS_SCHEMA_1_18 = copy.deepcopy(GET_RPS_SCHEMA_1_14) GET_RPS_SCHEMA_1_18['properties']['required'] = { "type": "string", } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/schemas/trait.py0000664000175000017500000000307400000000000022731 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """Trait schemas for Placement API.""" import copy from placement.schemas import common TRAIT = { "type": "string", 'minLength': 1, 'maxLength': 255, } CUSTOM_TRAIT = copy.deepcopy(TRAIT) CUSTOM_TRAIT.update({"pattern": common.CUSTOM_TRAIT_PATTERN}) PUT_TRAITS_SCHEMA = { "type": "object", "properties": { "traits": { "type": "array", "items": CUSTOM_TRAIT, } }, 'required': ['traits'], 'additionalProperties': False } SET_TRAITS_FOR_RP_SCHEMA = copy.deepcopy(PUT_TRAITS_SCHEMA) SET_TRAITS_FOR_RP_SCHEMA['properties']['traits']['items'] = TRAIT SET_TRAITS_FOR_RP_SCHEMA['properties'][ 'resource_provider_generation'] = {'type': 'integer'} SET_TRAITS_FOR_RP_SCHEMA['required'].append('resource_provider_generation') LIST_TRAIT_SCHEMA = { "type": "object", "properties": { "name": { "type": "string" }, "associated": { "type": "string", } }, "additionalProperties": False } ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/schemas/usage.py0000664000175000017500000000270000000000000022705 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Placement API schemas for usage information.""" import copy from placement.schemas import common # Represents the allowed query string parameters to GET /usages GET_USAGES_SCHEMA_1_9 = { "type": "object", "properties": { "project_id": { "type": "string", "minLength": 1, "maxLength": 255, }, "user_id": { "type": "string", "minLength": 1, "maxLength": 255, }, }, "required": [ "project_id" ], "additionalProperties": False, } # An optional consumer type was added to the usage dicts in this # version of GET /usages. GET_USAGES_SCHEMA_V1_38 = copy.deepcopy(GET_USAGES_SCHEMA_1_9) GET_USAGES_SCHEMA_V1_38['properties']['consumer_type'] = { "type": "string", "pattern": common.CONSUMER_TYPE_GET_PATTERN, "minLength": 1, "maxLength": 255, } ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2567778 openstack_placement-13.0.0/placement/tests/0000775000175000017500000000000000000000000020747 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/README.rst0000664000175000017500000000427100000000000022442 0ustar00zuulzuul00000000000000========================================== OpenStack Placement Testing Infrastructure ========================================== This README file attempts to provides some brief guidance for writing tests when fixing bugs or adding features to placement. For a lot more information see the `contributor docs`_. Test Types: Unit vs. Functional vs. 
Integration ----------------------------------------------- Placement tests are divided into three types: * Unit: tests which confirm the behavior of individual pieces of the code (individual methods or classes) with minimal dependency on other code or on externals like the database. * Functional: tests which confirm a chunk of behavior, end to end, such as an HTTP endpoint accepting a body from a request and returning the expected response but without reliance on code or services that are external to placement. * Integration: tests that confirm that things work with other services, such as nova. Placement uses all three, but the majority are functional tests. This is the result of the fairly direct architecture of placement: It is a WSGI application that talks to a database. Writing Unit Tests ------------------ Placement unit tests are based on the ``TestCase`` that comes with the ``testtools`` package. Use mocks only as necessary. If you find that you need multiple mocks to make a test for the code you are testing may benefit from being refactored to smaller units. Writing Functional Tests ------------------------ There are two primary classes of functional test in placement: * Testing database operations. These are based on ``placement.tests.functional.base.TestCase`` which is responsible for starting an in-memory database and a reasonable minimal configuration. * Testing the HTTP API using `gabbi`_. Writing Integration Tests ------------------------- Placement configures its gate and check jobs via the ``.zuul.yaml`` file in the root of the code repository. Some of the entries in that file configure integration jobs, many of which use `tempest`_. .. _gabbi: https://gabbi.readthedocs.io/ .. _contributor docs: https://docs.openstack.org/placement/latest/contributor/ .. _tempest: https://docs.openstack.org/tempest/latest/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/__init__.py0000664000175000017500000000000000000000000023046 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/fixtures.py0000664000175000017500000000611100000000000023171 0ustar00zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Fixtures for Placement tests.""" from oslo_config import cfg from oslo_db.sqlalchemy import test_fixtures from placement.db.sqlalchemy import migration from placement import db_api as placement_db from placement import deploy from placement.objects import resource_class from placement.objects import trait class Database(test_fixtures.GeneratesSchema, test_fixtures.AdHocDbFixture): def __init__(self, conf_fixture, set_config=False): """Create a database fixture.""" super(Database, self).__init__() if set_config: try: conf_fixture.register_opt( cfg.StrOpt('connection'), group='placement_database') except cfg.DuplicateOptError: # already registered pass conf_fixture.config(connection='sqlite://', group='placement_database') self.conf_fixture = conf_fixture self.get_engine = placement_db.get_placement_engine placement_db.configure(self.conf_fixture.conf) def get_enginefacade(self): return placement_db.placement_context_manager def generate_schema_create_all(self, engine): # note: at this point in oslo_db's fixtures, the incoming # Engine has **not** been associated with the global # context manager yet. migration.create_schema(engine) # so, to work around that placement's setup code really wants to # use the enginefacade, we will patch the engine into it early. # oslo_db is going to patch it anyway later. So the bug in oslo.db # is that code these days really wants the facade to be set up fully # when it's time to create the database. When oslo_db's fixtures # were written, enginefacade was not in use yet so it was not # anticipated that everyone would be doing things this way _reset_facade = placement_db.placement_context_manager.patch_engine( engine) self.addCleanup(_reset_facade) # Make sure db flags are correct at both the start and finish # of the test. self.addCleanup(self.cleanup) self.cleanup() # Sync traits and resource classes. deploy.update_database(self.conf_fixture.conf) def cleanup(self): trait._TRAITS_SYNCED = False resource_class._RESOURCE_CLASSES_SYNCED = False ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2567778 openstack_placement-13.0.0/placement/tests/functional/0000775000175000017500000000000000000000000023111 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/__init__.py0000664000175000017500000000000000000000000025210 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/base.py0000664000175000017500000000434400000000000024402 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_config import cfg from oslo_config import fixture as config_fixture from oslo_log.fixture import logging_error from oslotest import output import testtools from placement import conf from placement import context from placement.tests import fixtures from placement.tests.functional.fixtures import capture from placement.tests.unit import policy_fixture class TestCase(testtools.TestCase): """A base test case for placement functional tests. Sets up minimum configuration for database and policy handling and establishes the placement database. """ USES_DB = True def setUp(self): super(TestCase, self).setUp() # Manage required configuration self.conf_fixture = self.useFixture( config_fixture.Config(cfg.ConfigOpts())) conf.register_opts(self.conf_fixture.conf) if self.USES_DB: self.placement_db = self.useFixture(fixtures.Database( self.conf_fixture, set_config=True)) else: self.conf_fixture.config( connection='sqlite://', group='placement_database', ) self.conf_fixture.conf([], default_config_files=[]) self.useFixture(policy_fixture.PolicyFixture(self.conf_fixture)) self.useFixture(capture.Logging()) self.useFixture(output.CaptureOutput()) # Filter ignorable warnings during test runs. self.useFixture(capture.WarningsFixture()) self.useFixture(logging_error.get_logging_handle_error_fixture()) self.context = context.RequestContext() self.context.config = self.conf_fixture.conf class NoDBTestCase(TestCase): USES_DB = False ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2567778 openstack_placement-13.0.0/placement/tests/functional/cmd/0000775000175000017500000000000000000000000023654 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/cmd/__init__.py0000664000175000017500000000000000000000000025753 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/cmd/test_status.py0000664000175000017500000000755700000000000026626 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import io import fixtures from oslo_config import cfg from oslo_upgradecheck import upgradecheck from oslo_utils.fixture import uuidsentinel from placement.cmd import status from placement import conf from placement import db_api from placement.objects import consumer from placement.objects import resource_provider from placement.tests.functional import base from placement.tests.functional.db import test_consumer class UpgradeCheckIncompleteConsumersTestCase( base.TestCase, test_consumer.CreateIncompleteAllocationsMixin, ): """Tests the "Incomplete Consumers" check for the "placement-status upgrade check" command. 
""" def setUp(self): super(UpgradeCheckIncompleteConsumersTestCase, self).setUp() self.output = io.StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) config = cfg.ConfigOpts() conf.register_opts(config) config(args=[], project='placement') self.checks = status.Checks(config) def test_check_incomplete_consumers(self): # Create some allocations with 3 missing consumers. self._create_incomplete_allocations( self.context, num_of_consumer_allocs=2) result = self.checks._check_incomplete_consumers() # Since there are incomplete consumers, there should be a warning. self.assertEqual(upgradecheck.Code.WARNING, result.code) # Check the details for the consumer count. self.assertIn('There are 3 incomplete consumers table records for ' 'existing allocations', result.details) # Run the online data migration (as recommended from the check output). consumer.create_incomplete_consumers(self.context, batch_size=50) # Run the check again and it should be successful. result = self.checks._check_incomplete_consumers() self.assertEqual(upgradecheck.Code.SUCCESS, result.code) def test_check_root_provider_ids(self): @db_api.placement_context_manager.writer def _create_old_rp(ctx): rp_tbl = resource_provider._RP_TBL ins_stmt1 = rp_tbl.insert().values( id=1, uuid=uuidsentinel.rp1, name='rp-1', root_provider_id=None, parent_provider_id=None, generation=42, ) ctx.session.execute(ins_stmt1) # Create a resource provider with no root provider id. _create_old_rp(self.context) result = self.checks._check_root_provider_ids() # Since there is a missing root id, there should be a failure. self.assertEqual(upgradecheck.Code.FAILURE, result.code) # Check the details for the consumer count. self.assertIn('There is at least one resource provider table record ' 'which misses its root provider id. ', result.details) # Run the online data migration as recommended from the check output. resource_provider.set_root_provider_ids(self.context, batch_size=50) # Run the check again and it should be successful. result = self.checks._check_root_provider_ids() self.assertEqual(upgradecheck.Code.SUCCESS, result.code) def test_all_registered_check_is_runnable(self): self.assertEqual(upgradecheck.Code.SUCCESS, self.checks.check()) ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.260778 openstack_placement-13.0.0/placement/tests/functional/db/0000775000175000017500000000000000000000000023476 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/db/__init__.py0000664000175000017500000000000000000000000025575 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/db/test_allocation.py0000664000175000017500000006641100000000000027244 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from unittest import mock import os_resource_classes as orc from oslo_utils.fixture import uuidsentinel from placement import exception from placement.objects import allocation as alloc_obj from placement.objects import consumer as consumer_obj from placement.objects import consumer_type as ct_obj from placement.objects import inventory as inv_obj from placement.objects import usage as usage_obj from placement.tests.functional.db import test_base as tb class TestAllocation(tb.PlacementDbBaseTestCase): def test_create_list_and_delete_allocation(self): rp, _ = self._make_allocation(tb.DISK_INVENTORY, tb.DISK_ALLOCATION) allocations = alloc_obj.get_all_by_resource_provider(self.ctx, rp) self.assertEqual(1, len(allocations)) self.assertEqual(tb.DISK_ALLOCATION['used'], allocations[0].used) alloc_obj.delete_all(self.ctx, allocations) allocations = alloc_obj.get_all_by_resource_provider(self.ctx, rp) self.assertEqual(0, len(allocations)) def test_delete_all_with_multiple_consumers(self): """Tests fix for LP #1781430 where alloc_obj.delete_all() when issued for a list of allocations returned by alloc_obj.get_by_resource_provider() where the resource provider had multiple consumers allocated against it, left the DB in an inconsistent state. """ # Create a single resource provider and allocate resources for two # instances from it. Then grab all the provider's allocations with # alloc_obj.get_all_by_resource_provider() and attempt to delete # them all with alloc_obj.delete_all(). After which, another call # to alloc_obj.get_all_by_resource_provider() should return an # empty list. cn1 = self._create_provider('cn1') tb.add_inventory(cn1, 'VCPU', 8) c1_uuid = uuidsentinel.consumer1 c2_uuid = uuidsentinel.consumer2 for c_uuid in (c1_uuid, c2_uuid): self.allocate_from_provider(cn1, 'VCPU', 1, consumer_id=c_uuid) allocs = alloc_obj.get_all_by_resource_provider(self.ctx, cn1) self.assertEqual(2, len(allocs)) alloc_obj.delete_all(self.ctx, allocs) allocs = alloc_obj.get_all_by_resource_provider(self.ctx, cn1) self.assertEqual(0, len(allocs)) def test_multi_provider_allocation(self): """Tests that an allocation that includes more than one resource provider can be created, listed and deleted properly. Bug #1707669 highlighted a situation that arose when attempting to remove part of an allocation for a source host during a resize operation where the exiting allocation was not being properly deleted. """ cn_source = self._create_provider('cn_source') cn_dest = self._create_provider('cn_dest') # Add same inventory to both source and destination host for cn in (cn_source, cn_dest): tb.add_inventory(cn, orc.VCPU, 24, allocation_ratio=16.0) tb.add_inventory(cn, orc.MEMORY_MB, 1024, min_unit=64, max_unit=1024, step_size=64, allocation_ratio=1.5) # Create an INSTANCE consumer type ct = ct_obj.ConsumerType(self.ctx, name='INSTANCE') ct.create() # Save consumer type id for later confirmation. 
ct_id = ct.id # Create a consumer representing the instance inst_consumer = consumer_obj.Consumer( self.ctx, uuid=uuidsentinel.instance, user=self.user_obj, project=self.project_obj, consumer_type_id=ct_id) inst_consumer.create() # Now create an allocation that represents a move operation where the # scheduler has selected cn_dest as the target host and created a # "doubled-up" allocation for the duration of the move operation alloc_list = [ alloc_obj.Allocation( consumer=inst_consumer, resource_provider=cn_source, resource_class=orc.VCPU, used=1), alloc_obj.Allocation( consumer=inst_consumer, resource_provider=cn_source, resource_class=orc.MEMORY_MB, used=256), alloc_obj.Allocation( consumer=inst_consumer, resource_provider=cn_dest, resource_class=orc.VCPU, used=1), alloc_obj.Allocation( consumer=inst_consumer, resource_provider=cn_dest, resource_class=orc.MEMORY_MB, used=256), ] alloc_obj.replace_all(self.ctx, alloc_list) src_allocs = alloc_obj.get_all_by_resource_provider( self.ctx, cn_source) self.assertEqual(2, len(src_allocs)) dest_allocs = alloc_obj.get_all_by_resource_provider(self.ctx, cn_dest) self.assertEqual(2, len(dest_allocs)) consumer_allocs = alloc_obj.get_all_by_consumer_id( self.ctx, uuidsentinel.instance) self.assertEqual(4, len(consumer_allocs)) # Validate that when we create an allocation for a consumer that we # delete any existing allocation and replace it with what the new. # Here, we're emulating the step that occurs on confirm_resize() where # the source host pulls the existing allocation for the instance and # removes any resources that refer to itself and saves the allocation # back to placement new_alloc_list = [ alloc_obj.Allocation( consumer=inst_consumer, resource_provider=cn_dest, resource_class=orc.VCPU, used=1), alloc_obj.Allocation( consumer=inst_consumer, resource_provider=cn_dest, resource_class=orc.MEMORY_MB, used=256), ] alloc_obj.replace_all(self.ctx, new_alloc_list) src_allocs = alloc_obj.get_all_by_resource_provider( self.ctx, cn_source) self.assertEqual(0, len(src_allocs)) dest_allocs = alloc_obj.get_all_by_resource_provider( self.ctx, cn_dest) self.assertEqual(2, len(dest_allocs)) consumer_allocs = alloc_obj.get_all_by_consumer_id( self.ctx, uuidsentinel.instance) self.assertEqual(2, len(consumer_allocs)) # check the allocations have the expected INSTANCE consumer type self.assertEqual(ct_id, consumer_allocs[0].consumer.consumer_type_id) self.assertEqual(ct_id, consumer_allocs[1].consumer.consumer_type_id) def test_get_all_by_resource_provider(self): rp, allocation = self._make_allocation(tb.DISK_INVENTORY, tb.DISK_ALLOCATION) allocations = alloc_obj.get_all_by_resource_provider(self.ctx, rp) self.assertEqual(1, len(allocations)) self.assertEqual(rp.id, allocations[0].resource_provider.id) self.assertEqual(allocation.resource_provider.id, allocations[0].resource_provider.id) class TestAllocationListCreateDelete(tb.PlacementDbBaseTestCase): def test_allocation_checking(self): """Test that allocation check logic works with 2 resource classes on one provider. 
If this fails, we get a KeyError at replace_all() """ max_unit = 10 consumer_uuid = uuidsentinel.consumer consumer_uuid2 = uuidsentinel.consumer2 # Create a consumer representing the two instances consumer = consumer_obj.Consumer( self.ctx, uuid=consumer_uuid, user=self.user_obj, project=self.project_obj) consumer.create() consumer2 = consumer_obj.Consumer( self.ctx, uuid=consumer_uuid2, user=self.user_obj, project=self.project_obj) consumer2.create() # Create one resource provider with 2 classes rp1_name = uuidsentinel.rp1_name rp1_uuid = uuidsentinel.rp1_uuid rp1_class = orc.DISK_GB rp1_used = 6 rp2_class = orc.IPV4_ADDRESS rp2_used = 2 rp1 = self._create_provider(rp1_name, uuid=rp1_uuid) tb.add_inventory(rp1, rp1_class, 1024, max_unit=max_unit) tb.add_inventory(rp1, rp2_class, 255, reserved=2, max_unit=max_unit) # create the allocations for a first consumer allocation_1 = alloc_obj.Allocation( resource_provider=rp1, consumer=consumer, resource_class=rp1_class, used=rp1_used) allocation_2 = alloc_obj.Allocation( resource_provider=rp1, consumer=consumer, resource_class=rp2_class, used=rp2_used) allocation_list = [allocation_1, allocation_2] alloc_obj.replace_all(self.ctx, allocation_list) # create the allocations for a second consumer, until we have # allocations for more than one consumer in the db, then we # won't actually be doing real allocation math, which triggers # the sql monster. allocation_1 = alloc_obj.Allocation( resource_provider=rp1, consumer=consumer2, resource_class=rp1_class, used=rp1_used) allocation_2 = alloc_obj.Allocation( resource_provider=rp1, consumer=consumer2, resource_class=rp2_class, used=rp2_used) allocation_list = [allocation_1, allocation_2] # If we are joining wrong, this will be a KeyError alloc_obj.replace_all(self.ctx, allocation_list) def test_allocation_list_create(self): max_unit = 10 consumer_uuid = uuidsentinel.consumer # Create a consumer representing the instance inst_consumer = consumer_obj.Consumer( self.ctx, uuid=consumer_uuid, user=self.user_obj, project=self.project_obj) inst_consumer.create() # Create two resource providers rp1_name = uuidsentinel.rp1_name rp1_uuid = uuidsentinel.rp1_uuid rp1_class = orc.DISK_GB rp1_used = 6 rp2_name = uuidsentinel.rp2_name rp2_uuid = uuidsentinel.rp2_uuid rp2_class = orc.IPV4_ADDRESS rp2_used = 2 rp1 = self._create_provider(rp1_name, uuid=rp1_uuid) rp2 = self._create_provider(rp2_name, uuid=rp2_uuid) # Two allocations, one for each resource provider. allocation_1 = alloc_obj.Allocation( resource_provider=rp1, consumer=inst_consumer, resource_class=rp1_class, used=rp1_used) allocation_2 = alloc_obj.Allocation( resource_provider=rp2, consumer=inst_consumer, resource_class=rp2_class, used=rp2_used) allocation_list = [allocation_1, allocation_2] # There's no inventory, we have a failure. error = self.assertRaises(exception.InvalidInventory, alloc_obj.replace_all, self.ctx, allocation_list) # Confirm that the resource class string, not index, is in # the exception and resource providers are listed by uuid. self.assertIn(rp1_class, str(error)) self.assertIn(rp2_class, str(error)) self.assertIn(rp1.uuid, str(error)) self.assertIn(rp2.uuid, str(error)) # Add inventory for one of the two resource providers. This should also # fail, since rp2 has no inventory. 
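        # Illustrative sketch (not executed): replace_all() validates the
        # whole allocation list atomically, so the failure modes exercised by
        # the assertions around here look roughly like:
        #
        #   try:
        #       alloc_obj.replace_all(self.ctx, allocation_list)
        #   except exception.InvalidInventory:
        #       pass  # a referenced provider has no inventory of that class
        #   except exception.InvalidAllocationConstraintsViolated:
        #       pass  # min_unit/max_unit/step_size constraints not satisfied
        #
        # The exception names are the ones already imported in this module;
        # the real calls and assertions follow just below.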
tb.add_inventory(rp1, rp1_class, 1024, max_unit=1) self.assertRaises(exception.InvalidInventory, alloc_obj.replace_all, self.ctx, allocation_list) # Add inventory for the second resource provider tb.add_inventory(rp2, rp2_class, 255, reserved=2, max_unit=1) # Now the allocations will still fail because max_unit 1 self.assertRaises(exception.InvalidAllocationConstraintsViolated, alloc_obj.replace_all, self.ctx, allocation_list) inv1 = inv_obj.Inventory(resource_provider=rp1, resource_class=rp1_class, total=1024, max_unit=max_unit) rp1.set_inventory([inv1]) inv2 = inv_obj.Inventory(resource_provider=rp2, resource_class=rp2_class, total=255, reserved=2, max_unit=max_unit) rp2.set_inventory([inv2]) # Now we can finally allocate. alloc_obj.replace_all(self.ctx, allocation_list) # Check that those allocations changed usage on each # resource provider. rp1_usage = usage_obj.get_all_by_resource_provider_uuid( self.ctx, rp1_uuid) rp2_usage = usage_obj.get_all_by_resource_provider_uuid( self.ctx, rp2_uuid) self.assertEqual(rp1_used, rp1_usage[0].usage) self.assertEqual(rp2_used, rp2_usage[0].usage) # redo one allocation # TODO(cdent): This does not currently behave as expected # because a new allocation is created, adding to the total # used, not replacing. rp1_used += 1 self.allocate_from_provider( rp1, rp1_class, rp1_used, consumer=inst_consumer) rp1_usage = usage_obj.get_all_by_resource_provider_uuid( self.ctx, rp1_uuid) self.assertEqual(rp1_used, rp1_usage[0].usage) # delete the allocations for the consumer # NOTE(cdent): The database uses 'consumer_id' for the # column, presumably because some ids might not be uuids, at # some point in the future. consumer_allocations = alloc_obj.get_all_by_consumer_id( self.ctx, consumer_uuid) alloc_obj.delete_all(self.ctx, consumer_allocations) rp1_usage = usage_obj.get_all_by_resource_provider_uuid( self.ctx, rp1_uuid) rp2_usage = usage_obj.get_all_by_resource_provider_uuid( self.ctx, rp2_uuid) self.assertEqual(0, rp1_usage[0].usage) self.assertEqual(0, rp2_usage[0].usage) def _make_rp_and_inventory(self, **kwargs): # Create one resource provider and set some inventory rp_name = kwargs.get('rp_name') or uuidsentinel.rp_name rp_uuid = kwargs.get('rp_uuid') or uuidsentinel.rp_uuid rp = self._create_provider(rp_name, uuid=rp_uuid) rc = kwargs.pop('resource_class') tb.add_inventory(rp, rc, 1024, **kwargs) return rp def _validate_usage(self, rp, usage): rp_usage = usage_obj.get_all_by_resource_provider_uuid( self.ctx, rp.uuid) self.assertEqual(usage, rp_usage[0].usage) def _check_create_allocations(self, inventory_kwargs, bad_used, good_used): rp_class = orc.DISK_GB rp = self._make_rp_and_inventory(resource_class=rp_class, **inventory_kwargs) # allocation, bad step_size self.assertRaises(exception.InvalidAllocationConstraintsViolated, self.allocate_from_provider, rp, rp_class, bad_used) # correct for step size self.allocate_from_provider(rp, rp_class, good_used) # check usage self._validate_usage(rp, good_used) def test_create_all_step_size(self): bad_used = 4 good_used = 5 inventory_kwargs = {'max_unit': 9999, 'step_size': 5} self._check_create_allocations(inventory_kwargs, bad_used, good_used) def test_create_all_min_unit(self): bad_used = 4 good_used = 5 inventory_kwargs = {'max_unit': 9999, 'min_unit': 5} self._check_create_allocations(inventory_kwargs, bad_used, good_used) def test_create_all_max_unit(self): bad_used = 5 good_used = 3 inventory_kwargs = {'max_unit': 3} self._check_create_allocations(inventory_kwargs, bad_used, good_used) def 
test_create_and_clear(self): """Test that a used of 0 in an allocation wipes allocations.""" consumer_uuid = uuidsentinel.consumer # Create a consumer representing the instance inst_consumer = consumer_obj.Consumer( self.ctx, uuid=consumer_uuid, user=self.user_obj, project=self.project_obj) inst_consumer.create() rp_class = orc.DISK_GB target_rp = self._make_rp_and_inventory(resource_class=rp_class, max_unit=500) # Create two allocations with values and confirm the resulting # usage is as expected. allocation1 = alloc_obj.Allocation( resource_provider=target_rp, consumer=inst_consumer, resource_class=rp_class, used=100) allocation2 = alloc_obj.Allocation( resource_provider=target_rp, consumer=inst_consumer, resource_class=rp_class, used=200) allocation_list = [allocation1, allocation2] alloc_obj.replace_all(self.ctx, allocation_list) allocations = alloc_obj.get_all_by_consumer_id(self.ctx, consumer_uuid) self.assertEqual(2, len(allocations)) usage = sum(alloc.used for alloc in allocations) self.assertEqual(300, usage) # Create two allocations, one with 0 used, to confirm the # resulting usage is only of one. allocation1 = alloc_obj.Allocation( resource_provider=target_rp, consumer=inst_consumer, resource_class=rp_class, used=0) allocation2 = alloc_obj.Allocation( resource_provider=target_rp, consumer=inst_consumer, resource_class=rp_class, used=200) allocation_list = [allocation1, allocation2] alloc_obj.replace_all(self.ctx, allocation_list) allocations = alloc_obj.get_all_by_consumer_id(self.ctx, consumer_uuid) self.assertEqual(1, len(allocations)) usage = allocations[0].used self.assertEqual(200, usage) # add a source rp and a migration consumer migration_uuid = uuidsentinel.migration # Create a consumer representing the migration mig_consumer = consumer_obj.Consumer( self.ctx, uuid=migration_uuid, user=self.user_obj, project=self.project_obj) mig_consumer.create() source_rp = self._make_rp_and_inventory( rp_name=uuidsentinel.source_name, rp_uuid=uuidsentinel.source_uuid, resource_class=rp_class, max_unit=500) # Create two allocations, one as the consumer, one as the # migration. allocation1 = alloc_obj.Allocation( resource_provider=target_rp, consumer=inst_consumer, resource_class=rp_class, used=200) allocation2 = alloc_obj.Allocation( resource_provider=source_rp, consumer=mig_consumer, resource_class=rp_class, used=200) allocation_list = [allocation1, allocation2] alloc_obj.replace_all(self.ctx, allocation_list) # Check primary consumer allocations. allocations = alloc_obj.get_all_by_consumer_id(self.ctx, consumer_uuid) self.assertEqual(1, len(allocations)) usage = allocations[0].used self.assertEqual(200, usage) # Check migration allocations. allocations = alloc_obj.get_all_by_consumer_id( self.ctx, migration_uuid) self.assertEqual(1, len(allocations)) usage = allocations[0].used self.assertEqual(200, usage) # Clear the migration and confirm the target. 
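        # Illustrative sketch (not executed): writing an allocation list in
        # which every entry for a consumer has used=0 is how that consumer's
        # allocations get wiped, e.g. roughly:
        #
        #   alloc_obj.replace_all(self.ctx, [
        #       alloc_obj.Allocation(
        #           resource_provider=source_rp, consumer=mig_consumer,
        #           resource_class=rp_class, used=0),
        #   ])
        #   # afterwards, get_all_by_consumer_id() returns an empty list for
        #   # the migration consumer
        #
        # The real allocations doing exactly this follow below.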
allocation1 = alloc_obj.Allocation( resource_provider=target_rp, consumer=inst_consumer, resource_class=rp_class, used=200) allocation2 = alloc_obj.Allocation( resource_provider=source_rp, consumer=mig_consumer, resource_class=rp_class, used=0) allocation_list = [allocation1, allocation2] alloc_obj.replace_all(self.ctx, allocation_list) allocations = alloc_obj.get_all_by_consumer_id(self.ctx, consumer_uuid) self.assertEqual(1, len(allocations)) usage = allocations[0].used self.assertEqual(200, usage) allocations = alloc_obj.get_all_by_consumer_id( self.ctx, migration_uuid) self.assertEqual(0, len(allocations)) def test_create_exceeding_capacity_allocation(self): """Tests on a list of allocations which contains an invalid allocation exceeds resource provider's capacity. Expect InvalidAllocationCapacityExceeded to be raised and all allocations in the list should not be applied. """ empty_rp = self._create_provider('empty_rp') full_rp = self._create_provider('full_rp') for rp in (empty_rp, full_rp): tb.add_inventory(rp, orc.VCPU, 24, allocation_ratio=16.0) tb.add_inventory(rp, orc.MEMORY_MB, 1024, min_unit=64, max_unit=1024, step_size=64) # Create a consumer representing the instance inst_consumer = consumer_obj.Consumer( self.ctx, uuid=uuidsentinel.instance, user=self.user_obj, project=self.project_obj) inst_consumer.create() # First create a allocation to consume full_rp's resource. alloc_list = [ alloc_obj.Allocation( consumer=inst_consumer, resource_provider=full_rp, resource_class=orc.VCPU, used=12), alloc_obj.Allocation( consumer=inst_consumer, resource_provider=full_rp, resource_class=orc.MEMORY_MB, used=1024) ] alloc_obj.replace_all(self.ctx, alloc_list) # Create a consumer representing the second instance inst2_consumer = consumer_obj.Consumer( self.ctx, uuid=uuidsentinel.instance2, user=self.user_obj, project=self.project_obj) inst2_consumer.create() # Create an allocation list consisting of valid requests and an invalid # request exceeding the memory full_rp can provide. alloc_list = [ alloc_obj.Allocation( consumer=inst2_consumer, resource_provider=empty_rp, resource_class=orc.VCPU, used=12), alloc_obj.Allocation( consumer=inst2_consumer, resource_provider=empty_rp, resource_class=orc.MEMORY_MB, used=512), alloc_obj.Allocation( consumer=inst2_consumer, resource_provider=full_rp, resource_class=orc.VCPU, used=12), alloc_obj.Allocation( consumer=inst2_consumer, resource_provider=full_rp, resource_class=orc.MEMORY_MB, used=512), ] self.assertRaises(exception.InvalidAllocationCapacityExceeded, alloc_obj.replace_all, self.ctx, alloc_list) # Make sure that allocations of both empty_rp and full_rp remain # unchanged. allocations = alloc_obj.get_all_by_resource_provider(self.ctx, full_rp) self.assertEqual(2, len(allocations)) allocations = alloc_obj.get_all_by_resource_provider( self.ctx, empty_rp) self.assertEqual(0, len(allocations)) @mock.patch('placement.objects.allocation.LOG') def test_set_allocations_retry(self, mock_log): """Test server side allocation write retry handling.""" # Create a single resource provider and give it some inventory. rp1 = self._create_provider('rp1') tb.add_inventory(rp1, orc.VCPU, 24, allocation_ratio=16.0) tb.add_inventory(rp1, orc.MEMORY_MB, 1024, min_unit=64, max_unit=1024, step_size=64) original_generation = rp1.generation # Verify the generation is what we expect (we'll be checking again # later). self.assertEqual(2, original_generation) # Create a consumer and have it make an allocation. 
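        # Illustrative note (comments only): allocation writes are guarded by
        # the resource provider generation. If a concurrent update bumps the
        # generation mid-write, replace_all() retries up to the configured
        # limit before surfacing the conflict, e.g. roughly:
        #
        #   self.conf_fixture.config(
        #       allocation_conflict_retry_count=3, group='placement')
        #   alloc_obj.replace_all(self.ctx, alloc_list)  # retried on conflict
        #
        # Both the option name and the exception raised when retries are
        # exhausted (ResourceProviderConcurrentUpdateDetected) appear in the
        # test code below.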
inst_consumer = consumer_obj.Consumer( self.ctx, uuid=uuidsentinel.instance, user=self.user_obj, project=self.project_obj) inst_consumer.create() alloc_list = [ alloc_obj.Allocation( consumer=inst_consumer, resource_provider=rp1, resource_class=orc.VCPU, used=12), alloc_obj.Allocation( consumer=inst_consumer, resource_provider=rp1, resource_class=orc.MEMORY_MB, used=1024) ] # Make sure the right exception happens when the retry loop expires. self.conf_fixture.config(allocation_conflict_retry_count=0, group='placement') self.assertRaises( exception.ResourceProviderConcurrentUpdateDetected, alloc_obj.replace_all, self.ctx, alloc_list) mock_log.warning.assert_called_with( 'Exceeded retry limit of %d on allocations write', 0) # Make sure the right thing happens after a small number of failures. # There's a bit of mock magic going on here to ensure that we can # both do some side effects on _set_allocations as well as have the # real behavior. Two generation conflicts and then a success. mock_log.reset_mock() self.conf_fixture.config(allocation_conflict_retry_count=3, group='placement') unmocked_set = alloc_obj._set_allocations with mock.patch('placement.objects.allocation.' '_set_allocations') as mock_set: exceptions = iter([ exception.ResourceProviderConcurrentUpdateDetected(), exception.ResourceProviderConcurrentUpdateDetected(), ]) def side_effect(*args, **kwargs): try: raise next(exceptions) except StopIteration: return unmocked_set(*args, **kwargs) mock_set.side_effect = side_effect alloc_obj.replace_all(self.ctx, alloc_list) self.assertEqual(2, mock_log.debug.call_count) mock_log.debug.assert_has_calls( [mock.call('Retrying allocations write on resource provider ' 'generation conflict')] * 2) self.assertEqual(3, mock_set.call_count) # Confirm we're using a different rp object after the change # and that it has a higher generation. new_rp = alloc_list[0].resource_provider self.assertEqual(original_generation, rp1.generation) self.assertEqual(original_generation + 1, new_rp.generation) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/db/test_allocation_candidates.py0000664000175000017500000045154600000000000031432 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
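# NOTE: illustrative sketch only. The _req_group_search_context() helper
# defined below builds a res_ctx.RequestGroupSearchContext from a
# placement_lib.RequestGroup; a minimal call, assuming the default resources
# declared in the helper, looks roughly like:
#
#   rg_ctx = _req_group_search_context(
#       context, resources={orc.VCPU: 2}, required_traits=[])
#   ids = res_ctx.get_provider_ids_matching(rg_ctx)
#
# All names used here are the ones defined or imported in this module.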
import collections import os_resource_classes as orc import os_traits from oslo_utils.fixture import uuidsentinel as uuids import sqlalchemy as sa from placement import exception from placement import lib as placement_lib from placement.objects import allocation_candidate as ac_obj from placement.objects import research_context as res_ctx from placement.objects import resource_class as rc_obj from placement.objects import resource_provider as rp_obj from placement.objects import trait as trait_obj from placement.tests.functional.db import test_base as tb def _req_group_search_context(context, **kwargs): resources = { orc.VCPU: 2, orc.MEMORY_MB: 256, orc.SRIOV_NET_VF: 1, } request = placement_lib.RequestGroup( use_same_provider=False, resources=kwargs.get('resources', resources), required_traits=kwargs.get('required_traits', []), forbidden_traits=kwargs.get('forbidden_traits', set()), member_of=kwargs.get('member_of', []), forbidden_aggs=kwargs.get('forbidden_aggs', []), in_tree=kwargs.get('in_tree', None), ) has_trees = res_ctx._has_provider_trees(context) sharing = res_ctx.get_sharing_providers(context) rg_ctx = res_ctx.RequestGroupSearchContext( context, request, has_trees, sharing) return rg_ctx class ProviderDBHelperTestCase(tb.PlacementDbBaseTestCase): def test_get_provider_ids_matching(self): # These RPs are named based on whether we expect them to be 'incl'uded # or 'excl'uded in the result. # No inventory records. This one should never show up in a result. self._create_provider('no_inventory') # Inventory of adequate CPU and memory, no allocations against it. excl_big_cm_noalloc = self._create_provider('big_cm_noalloc') tb.add_inventory(excl_big_cm_noalloc, orc.VCPU, 15) tb.add_inventory(excl_big_cm_noalloc, orc.MEMORY_MB, 4096, max_unit=2048) # Inventory of adequate memory and disk, no allocations against it. excl_big_md_noalloc = self._create_provider('big_md_noalloc') tb.add_inventory(excl_big_md_noalloc, orc.MEMORY_MB, 4096, max_unit=2048) tb.add_inventory(excl_big_md_noalloc, orc.DISK_GB, 2000) # Adequate inventory, no allocations against it. incl_biginv_noalloc = self._create_provider('biginv_noalloc') tb.add_inventory(incl_biginv_noalloc, orc.VCPU, 15) tb.add_inventory(incl_biginv_noalloc, orc.MEMORY_MB, 4096, max_unit=2048) tb.add_inventory(incl_biginv_noalloc, orc.DISK_GB, 2000) # No allocations, but inventory unusable. Try to hit all the possible # reasons for exclusion. 
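        # Illustrative note (comments only): a provider can satisfy a request
        # for a resource class only if, roughly,
        #
        #   used + requested <= (total - reserved) * allocation_ratio
        #
        # and the requested amount also respects min_unit, max_unit and
        # step_size. Each of the 'excl_*' providers created below violates
        # exactly one of these conditions.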
# VCPU min_unit too high excl_badinv_min_unit = self._create_provider('badinv_min_unit') tb.add_inventory(excl_badinv_min_unit, orc.VCPU, 12, min_unit=6) tb.add_inventory(excl_badinv_min_unit, orc.MEMORY_MB, 4096, max_unit=2048) tb.add_inventory(excl_badinv_min_unit, orc.DISK_GB, 2000) # MEMORY_MB max_unit too low excl_badinv_max_unit = self._create_provider('badinv_max_unit') tb.add_inventory(excl_badinv_max_unit, orc.VCPU, 15) tb.add_inventory(excl_badinv_max_unit, orc.MEMORY_MB, 4096, max_unit=512) tb.add_inventory(excl_badinv_max_unit, orc.DISK_GB, 2000) # DISK_GB unsuitable step_size excl_badinv_step_size = self._create_provider('badinv_step_size') tb.add_inventory(excl_badinv_step_size, orc.VCPU, 15) tb.add_inventory(excl_badinv_step_size, orc.MEMORY_MB, 4096, max_unit=2048) tb.add_inventory(excl_badinv_step_size, orc.DISK_GB, 2000, step_size=7) # Not enough total VCPU excl_badinv_total = self._create_provider('badinv_total') tb.add_inventory(excl_badinv_total, orc.VCPU, 4) tb.add_inventory(excl_badinv_total, orc.MEMORY_MB, 4096, max_unit=2048) tb.add_inventory(excl_badinv_total, orc.DISK_GB, 2000) # Too much reserved MEMORY_MB excl_badinv_reserved = self._create_provider('badinv_reserved') tb.add_inventory(excl_badinv_reserved, orc.VCPU, 15) tb.add_inventory(excl_badinv_reserved, orc.MEMORY_MB, 4096, max_unit=2048, reserved=3500) tb.add_inventory(excl_badinv_reserved, orc.DISK_GB, 2000) # DISK_GB allocation ratio blows it up excl_badinv_alloc_ratio = self._create_provider('badinv_alloc_ratio') tb.add_inventory(excl_badinv_alloc_ratio, orc.VCPU, 15) tb.add_inventory(excl_badinv_alloc_ratio, orc.MEMORY_MB, 4096, max_unit=2048) tb.add_inventory(excl_badinv_alloc_ratio, orc.DISK_GB, 2000, allocation_ratio=0.5) # Inventory consumed in one RC, but available in the others excl_1invunavail = self._create_provider('1invunavail') tb.add_inventory(excl_1invunavail, orc.VCPU, 10) self.allocate_from_provider(excl_1invunavail, orc.VCPU, 7) tb.add_inventory(excl_1invunavail, orc.MEMORY_MB, 4096) self.allocate_from_provider(excl_1invunavail, orc.MEMORY_MB, 1024) tb.add_inventory(excl_1invunavail, orc.DISK_GB, 2000) self.allocate_from_provider(excl_1invunavail, orc.DISK_GB, 400) # Inventory all consumed excl_allused = self._create_provider('allused') tb.add_inventory(excl_allused, orc.VCPU, 10) self.allocate_from_provider(excl_allused, orc.VCPU, 7) tb.add_inventory(excl_allused, orc.MEMORY_MB, 4000) self.allocate_from_provider(excl_allused, orc.MEMORY_MB, 1500) self.allocate_from_provider(excl_allused, orc.MEMORY_MB, 2000) tb.add_inventory(excl_allused, orc.DISK_GB, 1500) self.allocate_from_provider(excl_allused, orc.DISK_GB, 1) # Inventory available in requested classes, but unavailable in others incl_extra_full = self._create_provider('extra_full') tb.add_inventory(incl_extra_full, orc.VCPU, 20) self.allocate_from_provider(incl_extra_full, orc.VCPU, 15) tb.add_inventory(incl_extra_full, orc.MEMORY_MB, 4096) self.allocate_from_provider(incl_extra_full, orc.MEMORY_MB, 1024) tb.add_inventory(incl_extra_full, orc.DISK_GB, 2000) self.allocate_from_provider(incl_extra_full, orc.DISK_GB, 400) tb.add_inventory(incl_extra_full, orc.PCI_DEVICE, 4) self.allocate_from_provider(incl_extra_full, orc.PCI_DEVICE, 1) self.allocate_from_provider(incl_extra_full, orc.PCI_DEVICE, 3) # Inventory available in a unrequested classes, not in requested ones excl_extra_avail = self._create_provider('extra_avail') # Incompatible step size tb.add_inventory(excl_extra_avail, orc.VCPU, 10, step_size=3) # Not enough left after 
reserved + used tb.add_inventory(excl_extra_avail, orc.MEMORY_MB, 4096, max_unit=2048, reserved=2048) self.allocate_from_provider(excl_extra_avail, orc.MEMORY_MB, 1040) # Allocation ratio math tb.add_inventory(excl_extra_avail, orc.DISK_GB, 2000, allocation_ratio=0.5) tb.add_inventory(excl_extra_avail, orc.IPV4_ADDRESS, 48) custom_special = rc_obj.ResourceClass(self.ctx, name='CUSTOM_SPECIAL') custom_special.create() tb.add_inventory(excl_extra_avail, 'CUSTOM_SPECIAL', 100) self.allocate_from_provider(excl_extra_avail, 'CUSTOM_SPECIAL', 99) resources = { orc.VCPU: 5, orc.MEMORY_MB: 1024, orc.DISK_GB: 1500 } # Run it! rg_ctx = _req_group_search_context(self.ctx, resources=resources) res = res_ctx.get_provider_ids_matching(rg_ctx) # We should get all the incl_* RPs expected = [incl_biginv_noalloc, incl_extra_full] self.assertEqual(set((rp.id, rp.id) for rp in expected), set(res)) # Now request that the providers must have a set of required traits and # that this results in no results returned, since we haven't yet # associated any traits with the providers avx2_t = trait_obj.Trait.get_by_name( self.ctx, os_traits.HW_CPU_X86_AVX2) req_traits = [{os_traits.HW_CPU_X86_AVX2}] rg_ctx = _req_group_search_context( self.ctx, resources=resources, required_traits=req_traits, ) res = res_ctx.get_provider_ids_matching(rg_ctx) self.assertEqual([], res) # Next let's set the required trait to an excl_* RPs. # This should result in no results returned as well. excl_big_md_noalloc.set_traits([avx2_t]) res = res_ctx.get_provider_ids_matching(rg_ctx) self.assertEqual([], res) # OK, now add the trait to one of the incl_* providers and verify that # provider now shows up in our results incl_biginv_noalloc.set_traits([avx2_t]) res = res_ctx.get_provider_ids_matching(rg_ctx) rp_ids = [r[0] for r in res] self.assertEqual([incl_biginv_noalloc.id], rp_ids) # ask for a complex required trait query: (AVX2 and (SEE or SSE2)) # first it should match no RPs as neither has SSE nor SSE2 req_traits = [ {os_traits.HW_CPU_X86_AVX2}, {os_traits.HW_CPU_X86_SSE, os_traits.HW_CPU_X86_SSE2} ] rg_ctx = _req_group_search_context( self.ctx, resources=resources, required_traits=req_traits, ) res = res_ctx.get_provider_ids_matching(rg_ctx) self.assertEqual([], res) # now add SSE to an RP that has no AVX2 so we still not have a match sse_t = trait_obj.Trait.get_by_name( self.ctx, os_traits.HW_CPU_X86_SSE) incl_extra_full.set_traits([sse_t]) res = res_ctx.get_provider_ids_matching(rg_ctx) self.assertEqual([], res) # now add SSE2 to an RP which also has AVX2. 
We expect that RP is a # match sse2_t = trait_obj.Trait.get_by_name( self.ctx, os_traits.HW_CPU_X86_SSE2) incl_biginv_noalloc.set_traits([avx2_t, sse2_t]) res = res_ctx.get_provider_ids_matching(rg_ctx) rp_ids = [r[0] for r in res] self.assertEqual([incl_biginv_noalloc.id], rp_ids) # Let's see if the in_tree filter works rg_ctx = _req_group_search_context( self.ctx, resources=resources, in_tree=uuids.biginv_noalloc, ) res = res_ctx.get_provider_ids_matching(rg_ctx) rp_ids = [r[0] for r in res] self.assertEqual([incl_biginv_noalloc.id], rp_ids) # We don't get anything if the specified tree doesn't satisfy the # requirements in the first place self.assertRaises(exception.ResourceProviderNotFound, _req_group_search_context, self.ctx, resources=resources, in_tree=uuids.allused) def test_get_provider_ids_matching_with_multiple_forbidden(self): rp1 = self._create_provider('rp1', uuids.agg1) tb.add_inventory(rp1, orc.VCPU, 64) rp2 = self._create_provider('rp2', uuids.agg1) trait_two, = tb.set_traits(rp2, 'CUSTOM_TWO') tb.add_inventory(rp2, orc.VCPU, 64) rp3 = self._create_provider('rp3') trait_three, = tb.set_traits(rp3, 'CUSTOM_THREE') tb.add_inventory(rp3, orc.VCPU, 64) resources = {orc.VCPU: 4} forbidden_traits = {trait_two.name, trait_three.name} member_of = [[uuids.agg1]] rg_ctx = _req_group_search_context( self.ctx, resources=resources, forbidden_traits=forbidden_traits, member_of=member_of) res = res_ctx.get_provider_ids_matching(rg_ctx) self.assertEqual({(rp1.id, rp1.id)}, set(res)) def test_get_provider_ids_matching_with_aggregates(self): rp1 = self._create_provider('rp1', uuids.agg1, uuids.agg2) rp2 = self._create_provider('rp2', uuids.agg2, uuids.agg3) rp3 = self._create_provider('rp3', uuids.agg3, uuids.agg4) rp4 = self._create_provider('rp4', uuids.agg4, uuids.agg1) rp5 = self._create_provider('rp5') tb.add_inventory(rp1, orc.VCPU, 64) tb.add_inventory(rp2, orc.VCPU, 64) tb.add_inventory(rp3, orc.VCPU, 64) tb.add_inventory(rp4, orc.VCPU, 64) tb.add_inventory(rp5, orc.VCPU, 64) resources = {orc.VCPU: 4} rg_ctx = _req_group_search_context( self.ctx, resources=resources, member_of=[[uuids.agg1]], ) expected_rp = [rp1, rp4] res = res_ctx.get_provider_ids_matching(rg_ctx) self.assertEqual(set((rp.id, rp.id) for rp in expected_rp), set(res)) rg_ctx = _req_group_search_context( self.ctx, resources=resources, member_of=[[uuids.agg1, uuids.agg2]], ) expected_rp = [rp1, rp2, rp4] res = res_ctx.get_provider_ids_matching(rg_ctx) self.assertEqual(set((rp.id, rp.id) for rp in expected_rp), set(res)) rg_ctx = _req_group_search_context( self.ctx, resources=resources, member_of=[[uuids.agg1, uuids.agg2], [uuids.agg4]], ) expected_rp = [rp4] res = res_ctx.get_provider_ids_matching(rg_ctx) self.assertEqual(set((rp.id, rp.id) for rp in expected_rp), set(res)) rg_ctx = _req_group_search_context( self.ctx, resources=resources, forbidden_aggs=[uuids.agg1], ) expected_rp = [rp2, rp3, rp5] res = res_ctx.get_provider_ids_matching(rg_ctx) self.assertEqual(set((rp.id, rp.id) for rp in expected_rp), set(res)) rg_ctx = _req_group_search_context( self.ctx, resources=resources, forbidden_aggs=[uuids.agg1, uuids.agg2], ) expected_rp = [rp3, rp5] res = res_ctx.get_provider_ids_matching(rg_ctx) self.assertEqual(set((rp.id, rp.id) for rp in expected_rp), set(res)) rg_ctx = _req_group_search_context( self.ctx, resources=resources, member_of=[[uuids.agg1, uuids.agg2]], forbidden_aggs=[uuids.agg3, uuids.agg4], ) expected_rp = [rp1] res = res_ctx.get_provider_ids_matching(rg_ctx) self.assertEqual(set((rp.id, rp.id) for rp 
in expected_rp), set(res)) rg_ctx = _req_group_search_context( self.ctx, resources=resources, member_of=[[uuids.agg1]], forbidden_aggs=[uuids.agg1], ) expected_rp = [] res = res_ctx.get_provider_ids_matching(rg_ctx) self.assertEqual(set((rp.id, rp.id) for rp in expected_rp), set(res)) def test_get_provider_ids_having_all_traits(self): def run(required_traits, expected_ids): # translate trait names to trait ids in the nested structure required_traits = [ { self.ctx.trait_cache.id_from_string(trait) for trait in any_traits } for any_traits in required_traits ] obs = res_ctx.provider_ids_matching_required_traits( self.ctx, required_traits) self.assertEqual(sorted(expected_ids), sorted(obs)) # No traits. This will never be returned, because it's illegal to # invoke the method with no traits. self._create_provider('cn1') # One trait cn2 = self._create_provider('cn2') tb.set_traits(cn2, 'HW_CPU_X86_TBM') # One the same as cn2 cn3 = self._create_provider('cn3') tb.set_traits(cn3, 'HW_CPU_X86_TBM', 'HW_CPU_X86_TSX', 'HW_CPU_X86_SGX') # Disjoint cn4 = self._create_provider('cn4') tb.set_traits(cn4, 'HW_CPU_X86_SSE2', 'HW_CPU_X86_SSE3', 'CUSTOM_FOO') # Request with no traits not allowed self.assertRaises( ValueError, res_ctx.provider_ids_matching_required_traits, self.ctx, None) self.assertRaises( ValueError, res_ctx.provider_ids_matching_required_traits, self.ctx, []) # Common trait returns both RPs having it run([{'HW_CPU_X86_TBM'}], [cn2.id, cn3.id]) # Just the one run([{'HW_CPU_X86_TSX'}], [cn3.id]) run([{'HW_CPU_X86_TSX'}, {'HW_CPU_X86_SGX'}], [cn3.id]) run([{'CUSTOM_FOO'}], [cn4.id]) # Including the common one still just gets me cn3 run([{'HW_CPU_X86_TBM'}, {'HW_CPU_X86_SGX'}], [cn3.id]) run( [{'HW_CPU_X86_TBM'}, {'HW_CPU_X86_TSX'}, {'HW_CPU_X86_SGX'}], [cn3.id]) # Can't be satisfied run([{'HW_CPU_X86_TBM'}, {'HW_CPU_X86_TSX'}, {'CUSTOM_FOO'}], []) run([{'HW_CPU_X86_TBM'}, {'HW_CPU_X86_TSX'}, {'HW_CPU_X86_SGX'}, {'CUSTOM_FOO'}], []) run([{'HW_CPU_X86_SGX'}, {'HW_CPU_X86_SSE3'}], []) run([{'HW_CPU_X86_TBM'}, {'CUSTOM_FOO'}], []) run([{'HW_CPU_X86_BMI'}], []) trait_obj.Trait(self.ctx, name='CUSTOM_BAR').create() run([{'CUSTOM_BAR'}], []) # now let's use traits with OR relationships as well run([{'HW_CPU_X86_TBM', 'HW_CPU_X86_TSX'}], [cn2.id, cn3.id]) run([{'HW_CPU_X86_TBM', 'HW_CPU_X86_SSE2'}], [cn2.id, cn3.id, cn4.id]) run([{'HW_CPU_X86_TSX', 'CUSTOM_FOO'}], [cn3.id, cn4.id]) run( [{'HW_CPU_X86_TBM', 'HW_CPU_X86_TSX', 'CUSTOM_FOO'}], [cn2.id, cn3.id, cn4.id]) trait_obj.Trait(self.ctx, name='CUSTOM_BAZ').create() run([{'CUSTOM_BAR', 'CUSTOM_BAZ'}], []) run([{'HW_CPU_X86_TBM', 'HW_CPU_X86_SSE2'}, {'CUSTOM_BAR'}], []) run([{'HW_CPU_X86_TBM'}, {'HW_CPU_X86_TSX', 'CUSTOM_FOO'}], [cn3.id]) class ProviderTreeDBHelperTestCase(tb.PlacementDbBaseTestCase): def _get_rp_ids_matching_names(self, names): """Utility function to look up resource provider IDs from a set of supplied provider names directly from the API DB. """ names = map(str, names) sel = sa.select(rp_obj._RP_TBL.c.id) sel = sel.where(rp_obj._RP_TBL.c.name.in_(names)) with self.placement_db.get_engine().connect() as conn: rp_ids = set([r[0] for r in conn.execute(sel)]) return rp_ids def test_get_trees_matching_all(self): """Creates a few provider trees having different inventories and allocations and tests the get_trees_matching_all_resources() utility function to ensure that matching trees and resource providers are returned. 
""" def _run_test(expected_trees, expected_rps, **kwargs): """Helper function to validate the test result""" # NOTE(jaypipes): get_trees_matching_all() expects a dict of # resource class internal identifiers, not string names if not expected_trees: try: self.assertRaises(exception.ResourceProviderNotFound, _req_group_search_context, self.ctx, **kwargs) return except Exception: pass rg_ctx = _req_group_search_context(self.ctx, **kwargs) rw_ctx = res_ctx.RequestWideSearchContext( self.ctx, placement_lib.RequestWideParams(), True) results = res_ctx.get_trees_matching_all(rg_ctx, rw_ctx) tree_ids = self._get_rp_ids_matching_names(expected_trees) rp_ids = self._get_rp_ids_matching_names(expected_rps) self.assertEqual(tree_ids, results.trees) self.assertEqual(rp_ids, results.rps) # Before we even set up any providers, verify that the short-circuits # work to return empty lists _run_test([], []) # We are setting up 3 trees of providers that look like this: # # compute node (cn) # / \ # / \ # numa cell 0 numa cell 1 # | | # | | # pf 0 pf 1 # for x in ('1', '2', '3'): name = 'cn' + x cn = self._create_provider(name) tb.add_inventory(cn, orc.VCPU, 16) tb.add_inventory(cn, orc.MEMORY_MB, 32768) name = 'cn' + x + '_numa0' numa_cell0 = self._create_provider(name, parent=cn.uuid) name = 'cn' + x + '_numa1' numa_cell1 = self._create_provider(name, parent=cn.uuid) name = 'cn' + x + '_numa0_pf0' pf0 = self._create_provider(name, parent=numa_cell0.uuid) tb.add_inventory(pf0, orc.SRIOV_NET_VF, 8) name = 'cn' + x + '_numa1_pf1' pf1 = self._create_provider(name, parent=numa_cell1.uuid) tb.add_inventory(pf1, orc.SRIOV_NET_VF, 8) if x == '1': # Associate the first compute node with agg1 and agg2 cn.set_aggregates([uuids.agg1, uuids.agg2]) if x == '2': # Associate the second PF on the second compute node with agg2 pf1.set_aggregates([uuids.agg2]) if x == '3': # Associate the first compute node with agg2 and agg3 cn.set_aggregates([uuids.agg2, uuids.agg3]) # Associate the second PF on the second compute node with agg4 pf1.set_aggregates([uuids.agg4]) # Mark the second PF on the third compute node as having # GENEVE offload enabled tb.set_traits(pf1, os_traits.HW_NIC_OFFLOAD_GENEVE) # Doesn't really make a whole lot of logical sense, but allows # us to test situations where the same trait is associated with # multiple providers in the same tree and one of the providers # has inventory we will use... tb.set_traits(cn, os_traits.HW_NIC_OFFLOAD_GENEVE) # First, we test that all the candidates are returned expected_trees = ['cn1', 'cn2', 'cn3'] expected_rps = ['cn1', 'cn1_numa0_pf0', 'cn1_numa1_pf1', 'cn2', 'cn2_numa0_pf0', 'cn2_numa1_pf1', 'cn3', 'cn3_numa0_pf0', 'cn3_numa1_pf1'] _run_test(expected_trees, expected_rps) # Let's see if the tree_root_id filter works expected_trees = ['cn1'] expected_rps = ['cn1', 'cn1_numa0_pf0', 'cn1_numa1_pf1'] _run_test(expected_trees, expected_rps, in_tree=uuids.cn1) # Let's see if the aggregate filter works # 1. rps in agg1 # All rps under cn1 should be included because aggregate on a root # spans the whole tree member_of = [[uuids.agg1]] expected_trees = ['cn1'] expected_rps = ['cn1', 'cn1_numa0_pf0', 'cn1_numa1_pf1'] _run_test(expected_trees, expected_rps, member_of=member_of) # 2. rps in agg2 # cn2 doesn't come up because while cn2_numa1_pf1 is in agg2, aggs on # non-root does NOT span the whole tree. 
Thus cn2 can't provide VCPU # or MEMORY_MB resource member_of = [[uuids.agg2]] expected_trees = ['cn1', 'cn3'] expected_rps = ['cn1', 'cn1_numa0_pf0', 'cn1_numa1_pf1', 'cn3', 'cn3_numa0_pf0', 'cn3_numa1_pf1'] _run_test(expected_trees, expected_rps, member_of=member_of) # 3. rps in agg1 or agg3 # cn1 in agg1 and cn3 in agg3 comes up member_of = [[uuids.agg1, uuids.agg3]] expected_trees = ['cn1', 'cn3'] expected_rps = ['cn1', 'cn1_numa0_pf0', 'cn1_numa1_pf1', 'cn3', 'cn3_numa0_pf0', 'cn3_numa1_pf1'] _run_test(expected_trees, expected_rps, member_of=member_of) # 4. rps in (agg1 or agg2) and (agg3) # cn1 is not in agg3 member_of = [[uuids.agg1, uuids.agg2], [uuids.agg3]] expected_trees = ['cn3'] expected_rps = ['cn3', 'cn3_numa0_pf0', 'cn3_numa1_pf1'] _run_test(expected_trees, expected_rps, member_of=member_of) # 5. rps not in agg1 # All rps under cn1 are excluded forbidden_aggs = [uuids.agg1] expected_trees = ['cn2', 'cn3'] expected_rps = ['cn2', 'cn2_numa0_pf0', 'cn2_numa1_pf1', 'cn3', 'cn3_numa0_pf0', 'cn3_numa1_pf1'] _run_test(expected_trees, expected_rps, forbidden_aggs=forbidden_aggs) # 6. rps not in agg2 # All rps under cn1, under cn3 and pf1 on cn2 are excluded forbidden_aggs = [uuids.agg2] expected_trees = ['cn2'] expected_rps = ['cn2', 'cn2_numa0_pf0'] _run_test(expected_trees, expected_rps, forbidden_aggs=forbidden_aggs) # 7. rps neither in agg1 nor in agg4 # All rps under cn1 and pf1 on cn3 are excluded forbidden_aggs = [uuids.agg1, uuids.agg4] expected_trees = ['cn2', 'cn3'] expected_rps = ['cn2', 'cn2_numa0_pf0', 'cn2_numa1_pf1', 'cn3', 'cn3_numa0_pf0'] _run_test(expected_trees, expected_rps, forbidden_aggs=forbidden_aggs) # 8. rps in agg3 and neither in agg1 nor in agg4 # cn2 is not in agg3 so excluded member_of = [[uuids.agg3]] forbidden_aggs = [uuids.agg1, uuids.agg4] expected_trees = ['cn3'] expected_rps = ['cn3', 'cn3_numa0_pf0'] _run_test(expected_trees, expected_rps, member_of=member_of, forbidden_aggs=forbidden_aggs) # 9. rps in agg1 or agg3 and not in agg3 # ...which means rps in agg1 but not in agg3 member_of = [[uuids.agg1, uuids.agg3]] forbidden_aggs = [uuids.agg3] expected_trees = ['cn1'] expected_rps = ['cn1', 'cn1_numa0_pf0', 'cn1_numa1_pf1'] _run_test(expected_trees, expected_rps, member_of=member_of, forbidden_aggs=forbidden_aggs) # 10. rps in agg1 and not in agg1 # ...which results in no rp member_of = [[uuids.agg1]] forbidden_aggs = [uuids.agg1] expected_trees = [] expected_rps = [] _run_test(expected_trees, expected_rps, member_of=member_of, forbidden_aggs=forbidden_aggs) # OK, now consume all the VFs in the second compute node and verify # only the first and third computes are returned as root providers from # get_trees_matching_all() cn2_pf0 = rp_obj.ResourceProvider.get_by_uuid(self.ctx, uuids.cn2_numa0_pf0) self.allocate_from_provider(cn2_pf0, orc.SRIOV_NET_VF, 8) cn2_pf1 = rp_obj.ResourceProvider.get_by_uuid(self.ctx, uuids.cn2_numa1_pf1) self.allocate_from_provider(cn2_pf1, orc.SRIOV_NET_VF, 8) # cn2 had all its VFs consumed, so we should only get cn1 and cn3's IDs # as the root provider IDs. expected_trees = ['cn1', 'cn3'] expected_rps = ['cn1', 'cn1_numa0_pf0', 'cn1_numa1_pf1', 'cn3', 'cn3_numa0_pf0', 'cn3_numa1_pf1'] _run_test(expected_trees, expected_rps) # OK, now we're going to add a required trait to the mix. The only # provider that is decorated with the HW_NIC_OFFLOAD_GENEVE trait is # the second physical function on the third compute host. 
So we should # only get the third compute node back if we require that trait geneve_t = trait_obj.Trait.get_by_name( self.ctx, os_traits.HW_NIC_OFFLOAD_GENEVE) req_traits = [{geneve_t.name}] expected_trees = ['cn3'] # NOTE(tetsuro): Actually we also get providers without traits here. # This is reported as bug#1771707 and from users' view the bug is now # fixed out of this get_trees_matching_all() function by checking # traits later again in _check_traits_for_alloc_request(). # But ideally, we'd like to have only pf1 from cn3 here using SQL # query in get_trees_matching_all() function for optimization. # provider_names = ['cn3', 'cn3_numa1_pf1'] expected_rps = ['cn3', 'cn3_numa0_pf0', 'cn3_numa1_pf1'] _run_test(expected_trees, expected_rps, required_traits=req_traits) # Add in a required trait that no provider has associated with it and # verify that there are no returned allocation candidates avx2_t = trait_obj.Trait.get_by_name( self.ctx, os_traits.HW_CPU_X86_AVX2) req_traits = [{geneve_t.name}, {avx2_t.name}] _run_test([], [], required_traits=req_traits) # If we add the AVX2 trait as forbidden, not required, then we # should get back the original cn3 req_traits = [{geneve_t.name}] forbidden_traits = { avx2_t.name: avx2_t.id, } expected_trees = ['cn3'] # NOTE(tetsuro): Actually we also get providers without traits here. # This is reported as bug#1771707 and from users' view the bug is now # fixed out of this get_trees_matching_all() function by checking # traits later again in _check_traits_for_alloc_request(). # But ideally, we'd like to have only pf1 from cn3 here using SQL # query in get_trees_matching_all() function for optimization. # provider_names = ['cn3', 'cn3_numa1_pf1'] expected_rps = ['cn3', 'cn3_numa0_pf0', 'cn3_numa1_pf1'] _run_test(expected_trees, expected_rps, required_traits=req_traits, forbidden_traits=forbidden_traits) # Consume all the VFs in first and third compute nodes and verify # no more providers are returned cn1_pf0 = rp_obj.ResourceProvider.get_by_uuid(self.ctx, uuids.cn1_numa0_pf0) self.allocate_from_provider(cn1_pf0, orc.SRIOV_NET_VF, 8) cn1_pf1 = rp_obj.ResourceProvider.get_by_uuid(self.ctx, uuids.cn1_numa1_pf1) self.allocate_from_provider(cn1_pf1, orc.SRIOV_NET_VF, 8) cn3_pf0 = rp_obj.ResourceProvider.get_by_uuid(self.ctx, uuids.cn3_numa0_pf0) self.allocate_from_provider(cn3_pf0, orc.SRIOV_NET_VF, 8) cn3_pf1 = rp_obj.ResourceProvider.get_by_uuid(self.ctx, uuids.cn3_numa1_pf1) self.allocate_from_provider(cn3_pf1, orc.SRIOV_NET_VF, 8) _run_test([], [], required_traits=req_traits, forbidden_traits=forbidden_traits) def _make_trees_with_traits(self): # We are setting up 6 trees of providers with following traits: # # compute node (cn) # / \ # pf 0 pf 1 # # +-----+----------------+---------------------+---------------------+ # | | cn | pf0 | pf1 | # +-----+----------------+---------------------+---------------------+ # |tree1|HW_CPU_X86_AVX2 | |HW_NIC_OFFLOAD_GENEVE| # +-----+----------------+---------------------+---------------------+ # |tree2|STORAGE_DISK_SSD| | | # +-----+----------------+---------------------+---------------------+ # |tree3|HW_CPU_X86_AVX2 | | | # | |STORAGE_DISK_SSD| | | # +-----+----------------+---------------------+---------------------+ # |tree4| |HW_NIC_ACCEL_SSL | | # | | |HW_NIC_OFFLOAD_GENEVE| | # +-----+----------------+---------------------+---------------------+ # |tree5| |HW_NIC_ACCEL_SSL |HW_NIC_OFFLOAD_GENEVE| # +-----+----------------+---------------------+---------------------+ # |tree6| |HW_NIC_ACCEL_SSL |HW_NIC_ACCEL_SSL 
| # +-----+----------------+---------------------+---------------------+ # |tree7| | | | # +-----+----------------+---------------------+---------------------+ # rp_ids = set() for x in ('1', '2', '3', '4', '5', '6', '7'): name = 'cn' + x cn = self._create_provider(name) name = 'cn' + x + '_pf0' pf0 = self._create_provider(name, parent=cn.uuid) name = 'cn' + x + '_pf1' pf1 = self._create_provider(name, parent=cn.uuid) rp_ids |= set([cn.id, pf0.id, pf1.id]) if x == '1': tb.set_traits(cn, os_traits.HW_CPU_X86_AVX2) tb.set_traits(pf1, os_traits.HW_NIC_OFFLOAD_GENEVE) if x == '2': tb.set_traits(cn, os_traits.STORAGE_DISK_SSD) if x == '3': tb.set_traits(cn, os_traits.HW_CPU_X86_AVX2, os_traits.STORAGE_DISK_SSD) if x == '4': tb.set_traits(pf0, os_traits.HW_NIC_ACCEL_SSL, os_traits.HW_NIC_OFFLOAD_GENEVE) if x == '5': tb.set_traits(pf0, os_traits.HW_NIC_ACCEL_SSL) tb.set_traits(pf1, os_traits.HW_NIC_OFFLOAD_GENEVE) if x == '6': tb.set_traits(pf0, os_traits.HW_NIC_ACCEL_SSL) tb.set_traits(pf1, os_traits.HW_NIC_ACCEL_SSL) avx2_t = trait_obj.Trait.get_by_name( self.ctx, os_traits.HW_CPU_X86_AVX2) ssd_t = trait_obj.Trait.get_by_name( self.ctx, os_traits.STORAGE_DISK_SSD) geneve_t = trait_obj.Trait.get_by_name( self.ctx, os_traits.HW_NIC_OFFLOAD_GENEVE) ssl_t = trait_obj.Trait.get_by_name( self.ctx, os_traits.HW_NIC_ACCEL_SSL) return rp_ids, avx2_t, ssd_t, geneve_t, ssl_t def test_get_trees_with_traits(self): """Creates a few provider trees having different traits and tests the _get_trees_with_traits() utility function to ensure that only the root provider IDs of matching traits are returned. """ rp_ids, avx2_t, ssd_t, geneve_t, ssl_t = self._make_trees_with_traits() # Case1: required on root required_traits = [{avx2_t.id}] forbidden_traits = {} rp_tuples_with_trait = res_ctx._get_trees_with_traits( self.ctx, rp_ids, required_traits, forbidden_traits) tree_root_ids = set([p[1] for p in rp_tuples_with_trait]) provider_names = ['cn1', 'cn3'] expect_root_ids = self._get_rp_ids_matching_names(provider_names) self.assertEqual(expect_root_ids, tree_root_ids) # Case1': required on root with forbidden traits # Let's validate that cn3 disappears required_traits = [{avx2_t.id}] forbidden_traits = {ssd_t.id} rp_tuples_with_trait = res_ctx._get_trees_with_traits( self.ctx, rp_ids, required_traits, forbidden_traits) tree_root_ids = set([p[1] for p in rp_tuples_with_trait]) provider_names = ['cn1'] expect_root_ids = self._get_rp_ids_matching_names(provider_names) self.assertEqual(expect_root_ids, tree_root_ids) # Case2: multiple required on root required_traits = [{avx2_t.id}, {ssd_t.id}] forbidden_traits = {} rp_tuples_with_trait = res_ctx._get_trees_with_traits( self.ctx, rp_ids, required_traits, forbidden_traits) tree_root_ids = set([p[1] for p in rp_tuples_with_trait]) provider_names = ['cn3'] expect_root_ids = self._get_rp_ids_matching_names(provider_names) self.assertEqual(expect_root_ids, tree_root_ids) # Case3: required on child required_traits = [{geneve_t.id}] forbidden_traits = {} rp_tuples_with_trait = res_ctx._get_trees_with_traits( self.ctx, rp_ids, required_traits, forbidden_traits) tree_root_ids = set([p[1] for p in rp_tuples_with_trait]) provider_names = ['cn1', 'cn4', 'cn5'] expect_root_ids = self._get_rp_ids_matching_names(provider_names) self.assertEqual(expect_root_ids, tree_root_ids) # Case3': required on child with forbidden traits # Let's validate that cn4 disappears required_traits = [{geneve_t.id}] forbidden_traits = {ssl_t.id} rp_tuples_with_trait = res_ctx._get_trees_with_traits( 
self.ctx, rp_ids, required_traits, forbidden_traits) tree_root_ids = set([p[1] for p in rp_tuples_with_trait]) provider_names = ['cn1', 'cn5'] expect_root_ids = self._get_rp_ids_matching_names(provider_names) self.assertEqual(expect_root_ids, tree_root_ids) # Case4: multiple required on child required_traits = [{geneve_t.id}, {ssl_t.id}] forbidden_traits = {} rp_tuples_with_trait = res_ctx._get_trees_with_traits( self.ctx, rp_ids, required_traits, forbidden_traits) tree_root_ids = set([p[1] for p in rp_tuples_with_trait]) provider_names = ['cn4', 'cn5'] expect_root_ids = self._get_rp_ids_matching_names(provider_names) self.assertEqual(expect_root_ids, tree_root_ids) # Case5: required on root and child required_traits = [{avx2_t.id}, {geneve_t.id}] forbidden_traits = {} rp_tuples_with_trait = res_ctx._get_trees_with_traits( self.ctx, rp_ids, required_traits, forbidden_traits) tree_root_ids = set([p[1] for p in rp_tuples_with_trait]) provider_names = ['cn1'] expect_root_ids = self._get_rp_ids_matching_names(provider_names) self.assertEqual(expect_root_ids, tree_root_ids) def test_get_trees_with_traits_forbidden_1(self): """Using the following tree cn1 CUSTOM_FOO | cn1_c1 """ cn1 = self._create_provider('cn1') cn1_c1 = self._create_provider('cn1_c1', parent=cn1.uuid) tb.set_traits(cn1, 'CUSTOM_FOO') custom_foo = trait_obj.Trait.get_by_name(self.ctx, 'CUSTOM_FOO') required_traits = [] forbidden_traits = {custom_foo.id} rp_ids = {cn1.id, cn1_c1.id} # both RP from the tree rp_tuples_with_trait = res_ctx._get_trees_with_traits( self.ctx, rp_ids, required_traits, forbidden_traits) # tree is returned as the forbidden trait did not filter out all the # rps from the tree. The tree might still be a match to the request # via cn1_c1 self.assertEqual( {(cn1.id, cn1.id), (cn1_c1.id, cn1.id)}, rp_tuples_with_trait ) # simulate that cn1_c1 already filtered out by other filters rp_ids = {cn1.id} rp_tuples_with_trait = res_ctx._get_trees_with_traits( self.ctx, rp_ids, required_traits, forbidden_traits) # the tree is not returned any more as the only considered rp is cn1 # but that has a forbidden trait self.assertEqual(set(), rp_tuples_with_trait) def test_get_trees_with_traits_forbidden_2(self): """Using the following tree cn1 CUSTOM_FOO | cn1_c1 CUSTOM_FOO """ cn1 = self._create_provider('cn1') cn1_c1 = self._create_provider('cn1_c1', parent=cn1.uuid) tb.set_traits(cn1, 'CUSTOM_FOO') custom_foo = trait_obj.Trait.get_by_name(self.ctx, 'CUSTOM_FOO') tb.set_traits(cn1_c1, 'CUSTOM_FOO') required_traits = [] forbidden_traits = {custom_foo.id} rp_ids = {cn1.id, cn1_c1.id} rp_tuples_with_trait = res_ctx._get_trees_with_traits( self.ctx, rp_ids, required_traits, forbidden_traits) # now both rp from the tree is filtered out by the forbidden trait # so the tree is filtered out self.assertEqual(set(), rp_tuples_with_trait) def test_get_trees_with_traits_forbidden_3(self): """Using the following tree cn1 CUSTOM_FOO, CUSTOM_BAR | cn1_c1 """ cn1 = self._create_provider('cn1') cn1_c1 = self._create_provider('cn1_c1', parent=cn1.uuid) tb.set_traits(cn1, 'CUSTOM_FOO', 'CUSTOM_BAR') custom_foo = trait_obj.Trait.get_by_name(self.ctx, 'CUSTOM_FOO') custom_bar = trait_obj.Trait.get_by_name(self.ctx, 'CUSTOM_BAR') required_traits = [{custom_bar.id}] forbidden_traits = {custom_foo.id} rp_ids = {cn1.id, cn1_c1.id} rp_tuples_with_trait = res_ctx._get_trees_with_traits( self.ctx, rp_ids, required_traits, forbidden_traits) # only cn1 could provide the required trait but cn1 also has the # forbidden trait. 
The rest of the tree does not provide the required # trait so this tree cannot be a match for the request self.assertEqual(set(), rp_tuples_with_trait) # simulate that cn1_c1 already filtered out by other filters rp_ids = {cn1.id} rp_tuples_with_trait = res_ctx._get_trees_with_traits( self.ctx, rp_ids, required_traits, forbidden_traits) # only cn1 could provide the required trait but cn1 also has the # forbidden trait. There is no other rps in the tree to be considered. self.assertEqual(set(), rp_tuples_with_trait) def make_tree_for_any_traits(self, tree_index, trait_list): """Create an RP tree with traits CNx / \ CNx_C1 CNx_C2 | CNx_C1_GC1 """ cn_name = f'cn{tree_index}' cn = self._create_provider(cn_name) cn_c1 = self._create_provider(cn_name + 'c1', parent=cn.uuid) cn_c1_gc1 = self._create_provider( cn_name + 'c1_gc1', parent=cn_c1.uuid) cn_c2 = self._create_provider(cn_name + 'c2', parent=cn.uuid) rps = [cn, cn_c1, cn_c2, cn_c1_gc1] for rp, traits in zip(rps, trait_list): tb.set_traits(rp, *traits) return [(rp.id, cn.id) for rp in rps] def make_trees_with_traits_for_any_traits(self, rp_trait_list): rp_ids = [] for index, rp_traits in rp_trait_list: rp_ids += self.make_tree_for_any_traits(index, rp_traits) return rp_ids def test_get_trees_with_traits_any_traits(self): """We are setting up multiple RP trees with the same structure but with different traits. The structure is CNx / \ CNx_C1 CNx_C2 | CNx_C1_GC1 The required trait query is ((A or B) and C). Then we assert that only the matching trees are returned. """ a = 'CUSTOM_A' b = 'CUSTOM_B' c = 'CUSTOM_C' # autopep8: off matching_trees = [ # CN C1 C2 C1_GC1 (1, [[a, b, c], [], [], [], ], ), # noqa (2, [[a, c], [b], [], [], ], ), # noqa (3, [[a], [b, c], [], [], ], ), # noqa (4, [[a], [b], [c], [], ], ), # noqa (5, [[c], [b], [a], [], ], ), # noqa (6, [[], [a], [b], [c], ], ), # noqa (7, [[c], [], [a, b], [], ], ), # noqa (8, [[c], [], [], [a, b], ], ), # noqa (9, [[a, b], [b], [a], [c], ], ), # noqa (10, [[b, c], [], [], [], ], ), # noqa (11, [[c], [a], [], [], ], ), # noqa (12, [[a], [], [c], [], ], ), # noqa (13, [[b], [], [], [c], ], ), # noqa (14, [[], [b], [], [c], ], ), # noqa ] non_matching_trees = [ # CN C1 C2 C1_GC1 (15, [[a, b], [], [], [], ], ), # noqa (16, [[], [a], [], [b], ], ), # noqa (17, [[c], [], [], [], ], ), # noqa (18, [[], [c], [], [], ], ), # noqa (19, [[], [], [a], [], ], ), # noqa ] # autopep8: on matching_rp_ids = self.make_trees_with_traits_for_any_traits( matching_trees) non_matching_rp_ids = self.make_trees_with_traits_for_any_traits( non_matching_trees) trait_a = trait_obj.Trait.get_by_name(self.ctx, a).id trait_b = trait_obj.Trait.get_by_name(self.ctx, b).id trait_c = trait_obj.Trait.get_by_name(self.ctx, c).id # (A or B) and C required_traits = [{trait_a, trait_b}, {trait_c}] rp_tuples_with_trait = res_ctx._get_trees_with_traits( self.ctx, {rp_id for rp_id, _ in matching_rp_ids + non_matching_rp_ids}, required_traits, {} ) # we check that every RP from every tree we expected to match is # returned and none of the RPs from the other trees are returned self.assertEqual(set(matching_rp_ids), rp_tuples_with_trait) def test_get_trees_with_traits_any_traits_forbidden(self): """Query RP trees with complex trait query involving both AND and OR and forbidden traits We use the following tree structure for these test with specific traits. 
CN1 CUSTOM_A / \ CN1_C1 CN1_C2 CUSTOM_B, | CN1_C1_GC1 CUSTOM_C And each node has one extra custom trait with its own name so the test can easily forbid one or more RPs directly from the tree. We use the formula (CUSTOM_A or CUSTOM_B) and CUSTOM_C) in this test. Then we do the following cases where forbidden traits remove RPs: 1) with an unnecessary trait -> OK 2) with one side of an OR -> OK 3) with both side of an OR -> NOK 4) with one side of an AND -> NOK """ cn1 = self._create_provider('cn1') tb.set_traits(cn1, 'CUSTOM_A', 'CUSTOM_CN1') cn1_c1 = self._create_provider('cn1_c1', parent=cn1.uuid) tb.set_traits(cn1_c1, 'CUSTOM_CN1_C1') cn1_c1_gc1 = self._create_provider('cn1_c1_gc1', parent=cn1_c1.uuid) tb.set_traits(cn1_c1_gc1, 'CUSTOM_C', 'CUSTOM_CN1_C1_GC1') cn1_c2 = self._create_provider('cn1_c2', parent=cn1.uuid) tb.set_traits(cn1_c2, 'CUSTOM_B', 'CUSTOM_CN1_C2') trait_a = trait_obj.Trait.get_by_name(self.ctx, 'CUSTOM_A').id trait_b = trait_obj.Trait.get_by_name(self.ctx, 'CUSTOM_B').id trait_c = trait_obj.Trait.get_by_name(self.ctx, 'CUSTOM_C').id trait_cn1 = trait_obj.Trait.get_by_name(self.ctx, 'CUSTOM_CN1').id trait_cn1_c1 = trait_obj.Trait.get_by_name( self.ctx, 'CUSTOM_CN1_C1').id trait_cn1_c1_gc1 = trait_obj.Trait.get_by_name( self.ctx, 'CUSTOM_CN1_C1_GC1').id trait_cn1_c2 = trait_obj.Trait.get_by_name( self.ctx, 'CUSTOM_CN1_C2').id rp_ids = {cn1.id, cn1_c1.id, cn1_c1_gc1.id, cn1_c2.id} expected_whole_tree = {(rp_id, cn1.id) for rp_id in rp_ids} # (A or B) and C required_traits = [{trait_a, trait_b}, {trait_c}] # 1) forbid CN1_C1 but that is not needed forbidden_traits = {trait_cn1_c1} rp_tuples_with_trait = res_ctx._get_trees_with_traits( self.ctx, rp_ids, required_traits, forbidden_traits) self.assertEqual(expected_whole_tree, rp_tuples_with_trait) # 2) forbid CN1_C2 which has trait B. But trait A is also enough, and # we have that on CN1 so this should still match forbidden_traits = {trait_cn1_c2} rp_tuples_with_trait = res_ctx._get_trees_with_traits( self.ctx, rp_ids, required_traits, forbidden_traits) self.assertEqual(expected_whole_tree, rp_tuples_with_trait) # 3) forbid CN1 and CN1_C2. This means neither trait A nor B is # available so this is expected to not produce a match forbidden_traits = {trait_cn1_c2, trait_cn1} rp_tuples_with_trait = res_ctx._get_trees_with_traits( self.ctx, rp_ids, required_traits, forbidden_traits) self.assertEqual(set(), rp_tuples_with_trait) # 4) forbid CN1_C1_GC1. This means neither trait C is not available. # So (A or B) and C cannot be fulfilled. 
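        # Illustrative restatement (comments only) of the four cases above in
        # terms of the trait expression (CUSTOM_A or CUSTOM_B) and CUSTOM_C:
        #
        #   1) forbid an RP whose traits are not needed       -> still a match
        #   2) forbid the RP carrying one side of the OR      -> still a match
        #   3) forbid the RPs carrying both sides of the OR   -> no match
        #   4) forbid the only RP carrying the AND-ed trait C -> no match
        #
        # Case 4 is exercised by the call immediately below.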
forbidden_traits = {trait_cn1_c1_gc1} rp_tuples_with_trait = res_ctx._get_trees_with_traits( self.ctx, rp_ids, required_traits, forbidden_traits) self.assertEqual(set(), rp_tuples_with_trait) def test_get_roots_with_traits(self): _, avx2_t, ssd_t, geneve_t, ssl_t = self._make_trees_with_traits() def do_test(required=None, forbidden=None, expected=None): actual = res_ctx._get_roots_with_traits( self.ctx, set(trait.id for trait in required or []), set(trait.id for trait in forbidden or [])) if expected: expected = self._get_rp_ids_matching_names( 'cn%d' % d for d in expected) self.assertEqual(expected or set(), actual) # One of required/forbidden must be specified self.assertRaises(ValueError, do_test) # AVX2 is on cn1 and cn3 do_test(required=[avx2_t], expected=(1, 3)) # Multiple required do_test(required=[avx2_t, ssd_t], expected=(3,)) # No match on roots for a trait on children do_test(required=[geneve_t]) # ...even if including a trait also on roots do_test(required=[geneve_t, ssd_t]) # Forbid traits not on any roots. These are on non-root providers... do_test(forbidden=[geneve_t, ssl_t], expected=(1, 2, 3, 4, 5, 6, 7)) # ...and this one is nowhere in the environment. hdd_t = trait_obj.Trait.get_by_name( self.ctx, os_traits.STORAGE_DISK_HDD) do_test(forbidden=[hdd_t], expected=(1, 2, 3, 4, 5, 6, 7)) # Forbid traits just on roots do_test(forbidden=[avx2_t, ssd_t], expected=(4, 5, 6, 7)) # Forbid traits on roots and children do_test(forbidden=[ssd_t, ssl_t, geneve_t], expected=(1, 4, 5, 6, 7)) # Required & forbidden both on roots do_test(required=[avx2_t], forbidden=[ssd_t], expected=(1,)) # Same, but adding forbidden not on roots has no effect do_test(required=[avx2_t], forbidden=[ssd_t, ssl_t], expected=(1,)) # Required on roots, forbidden only on children do_test( required=[avx2_t, ssd_t], forbidden=[ssl_t, geneve_t], expected=(3,)) # Required & forbidden overlap. No results because it is impossible for # one provider to both have and not have a trait. (Unreachable in real # life due to conflict check in the handler.) do_test(required=[avx2_t, ssd_t], forbidden=[ssd_t, geneve_t]) class AllocationCandidatesTestCase(tb.PlacementDbBaseTestCase): """Tests a variety of scenarios with both shared and non-shared resource providers that the AllocationCandidates.get_by_requests() method returns a set of alternative allocation requests and provider summaries that may be used by the scheduler to sort/weigh the options it has for claiming resources against providers. """ def setUp(self): super(AllocationCandidatesTestCase, self).setUp() self.requested_resources = { orc.VCPU: 1, orc.MEMORY_MB: 64, orc.DISK_GB: 1500, } # For debugging purposes, populated by _create_provider and used by # _validate_allocation_requests to make failure results more readable. self.rp_uuid_to_name = {} def _get_allocation_candidates(self, groups=None, rqparams=None): if groups is None: groups = {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources)} if rqparams is None: rqparams = placement_lib.RequestWideParams() return ac_obj.AllocationCandidates.get_by_requests( self.ctx, groups, rqparams) def _mappings_to_suffix(self, mappings): """Turn a dict of AllocationRequest mappings keyed on suffix to a dict, keyed by uuid, of lists of suffixes. 
""" suffixes_by_uuid = collections.defaultdict(set) for suffix, rps in mappings.items(): for rp_uuid in rps: suffixes_by_uuid[rp_uuid].add(suffix) listed_sorted_suffixes = {} for rp_uuid, suffixes in suffixes_by_uuid.items(): listed_sorted_suffixes[rp_uuid] = sorted(list(suffixes)) return listed_sorted_suffixes def _validate_allocation_requests(self, expected, candidates, expect_suffixes=False): """Assert correctness of allocation requests in allocation candidates. This is set up to make it easy for the caller to specify the expected result, to make that expected structure readable for someone looking at the test case, and to make test failures readable for debugging. :param expected: A list of lists of tuples representing the expected allocation requests, of the form: [ [(resource_provider_name, resource_class_name, resource_count), ..., ], ... ] :param candidates: The result from AllocationCandidates.get_by_requests :param expect_suffixes: If True, validate the AllocationRequest mappings in the results, found as a list of suffixes in 4th member of the tuple described above. """ # Extract/convert allocation requests from candidates observed = [] for ar in candidates.allocation_requests: suffixes_by_uuid = self._mappings_to_suffix(ar.mappings) rrs = [] for rr in ar.resource_requests: req_tuple = (self.rp_uuid_to_name[rr.resource_provider.uuid], rr.resource_class, rr.amount) if expect_suffixes: req_tuple = ( req_tuple + (suffixes_by_uuid[rr.resource_provider.uuid], )) rrs.append(req_tuple) rrs.sort() observed.append(rrs) observed.sort() # Sort the guts of the expected structure for rr in expected: rr.sort() expected.sort() # Now we ought to be able to compare them self.assertEqual(expected, observed) def _validate_provider_summary_resources(self, expected, candidates): """Assert correctness of the resources in provider summaries in allocation candidates. This is set up to make it easy for the caller to specify the expected result, to make that expected structure readable for someone looking at the test case, and to make test failures readable for debugging. :param expected: A dict, keyed by resource provider name, of sets of 3-tuples containing resource class, capacity, and amount used: { resource_provider_name: set([ (resource_class, capacity, used), ..., ]), ..., } :param candidates: The result from AllocationCandidates.get_by_requests """ observed = {} for psum in candidates.provider_summaries: rpname = self.rp_uuid_to_name[psum.resource_provider.uuid] reslist = set() for res in psum.resources: reslist.add((res.resource_class, res.capacity, res.used)) if rpname in observed: self.fail("Found resource provider %s more than once in " "provider_summaries!" % rpname) observed[rpname] = reslist # Now we ought to be able to compare them self.assertEqual(expected, observed) def _validate_provider_summary_traits(self, expected, candidates): """Assert correctness of the traits in provider summaries in allocation candidates. This is set up to make it easy for the caller to specify the expected result, to make that expected structure readable for someone looking at the test case, and to make test failures readable for debugging. :param expected: A dict, keyed by resource provider name, of sets of string trait names: { resource_provider_name: set([ trait_name, ... 
]), ..., } :param candidates: The result from AllocationCandidates.get_by_requests """ observed = {} for psum in candidates.provider_summaries: rpname = self.rp_uuid_to_name[psum.resource_provider.uuid] observed[rpname] = set(psum.traits) self.assertEqual(expected, observed) def test_unknown_traits(self): missing = [{'UNKNOWN_TRAIT'}] requests = {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=missing)} self.assertRaises( exception.TraitNotFound, ac_obj.AllocationCandidates.get_by_requests, self.ctx, requests, placement_lib.RequestWideParams()) def test_allc_req_and_prov_summary(self): """Simply test with one resource provider that the allocation requests returned by AllocationCandidates have valid allocation_requests and provider_summaries. """ cn1 = self._create_provider('cn1') tb.add_inventory(cn1, orc.VCPU, 8) tb.add_inventory(cn1, orc.MEMORY_MB, 2048) tb.add_inventory(cn1, orc.DISK_GB, 2000) alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources={ orc.VCPU: 1 } )} ) expected = [ [('cn1', orc.VCPU, 1, [''])] ] self._validate_allocation_requests( expected, alloc_cands, expect_suffixes=True) expected = { 'cn1': set([ (orc.VCPU, 8, 0), (orc.MEMORY_MB, 2048, 0), (orc.DISK_GB, 2000, 0) ]), } self._validate_provider_summary_resources(expected, alloc_cands) def test_all_local(self): """Create some resource providers that can satisfy the request for resources with local (non-shared) resources and verify that the allocation requests returned by AllocationCandidates correspond with each of these resource providers. """ # Create three compute node providers with VCPU, RAM and local disk cn1, cn2, cn3 = (self._create_provider(name) for name in ('cn1', 'cn2', 'cn3')) for cn in (cn1, cn2, cn3): tb.add_inventory(cn, orc.VCPU, 24, allocation_ratio=16.0) tb.add_inventory(cn, orc.MEMORY_MB, 32768, min_unit=64, step_size=64, allocation_ratio=1.5) total_gb = 1000 if cn.name == 'cn3' else 2000 tb.add_inventory(cn, orc.DISK_GB, total_gb, reserved=100, min_unit=10, step_size=10, allocation_ratio=1.0) # Ask for the alternative placement possibilities and verify each # provider is returned alloc_cands = self._get_allocation_candidates() # Verify the provider summary information indicates 0 usage and # capacity calculated from above inventory numbers for the first two # compute nodes. The third doesn't show up because it lacks sufficient # disk capacity. expected = { 'cn1': set([ (orc.VCPU, 24 * 16.0, 0), (orc.MEMORY_MB, 32768 * 1.5, 0), (orc.DISK_GB, 2000 - 100, 0), ]), 'cn2': set([ (orc.VCPU, 24 * 16.0, 0), (orc.MEMORY_MB, 32768 * 1.5, 0), (orc.DISK_GB, 2000 - 100, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) # Verify the allocation requests that are returned. There should be 2 # allocation requests, one for each compute node, containing 3 # resources in each allocation request, one each for VCPU, RAM, and # disk. The amounts of the requests should correspond to the requested # resource amounts in the filter:resources dict passed to # AllocationCandidates.get_by_requests(). expected = [ [('cn1', orc.VCPU, 1), ('cn1', orc.MEMORY_MB, 64), ('cn1', orc.DISK_GB, 1500)], [('cn2', orc.VCPU, 1), ('cn2', orc.MEMORY_MB, 64), ('cn2', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) # Now let's add traits into the mix. 
Currently, none of the compute # nodes has the AVX2 trait associated with it, so we should get 0 # results if we required AVX2 alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=[{os_traits.HW_CPU_X86_AVX2}] )}, ) self._validate_allocation_requests([], alloc_cands) # If we then associate the AVX2 trait to just compute node 2, we should # get back just that compute node in the provider summaries tb.set_traits(cn2, 'HW_CPU_X86_AVX2') alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=[{os_traits.HW_CPU_X86_AVX2}] )}, ) # Only cn2 should be in our allocation requests now since that's the # only one with the required trait expected = [ [('cn2', orc.VCPU, 1), ('cn2', orc.MEMORY_MB, 64), ('cn2', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) p_sums = alloc_cands.provider_summaries self.assertEqual(1, len(p_sums)) expected = { 'cn2': set([ (orc.VCPU, 24 * 16.0, 0), (orc.MEMORY_MB, 32768 * 1.5, 0), (orc.DISK_GB, 2000 - 100, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) expected = { 'cn2': set(['HW_CPU_X86_AVX2']) } self._validate_provider_summary_traits(expected, alloc_cands) # Confirm that forbidden traits changes the results to get cn1. alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, forbidden_traits=set([os_traits.HW_CPU_X86_AVX2]) )}, ) expected = [ [('cn1', orc.VCPU, 1), ('cn1', orc.MEMORY_MB, 64), ('cn1', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) # Now create a more complex trait query: (AVX2 and (SSE or SSE2)) # First no result is expected as none of the RPs has SSE or SSE2 traits required_traits = [ {os_traits.HW_CPU_X86_AVX2}, {os_traits.HW_CPU_X86_SSE, os_traits.HW_CPU_X86_SSE2} ] alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=required_traits, )}, ) self._validate_allocation_requests([], alloc_cands) # Next we add SSE to one of the RPs that has no AVX2, so we still # expect empty result tb.set_traits(cn1, 'HW_CPU_X86_SSE') alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=required_traits, )}, ) self._validate_allocation_requests([], alloc_cands) # Next we add SSE2 to the cn2 where there are AVX2 too, and we expect # that cn2 is a match now tb.set_traits(cn2, 'HW_CPU_X86_AVX2', 'HW_CPU_X86_SSE2') alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=required_traits, )}, ) expected = [ [('cn2', orc.VCPU, 1), ('cn2', orc.MEMORY_MB, 64), ('cn2', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) p_sums = alloc_cands.provider_summaries self.assertEqual(1, len(p_sums)) expected = { 'cn2': set([ (orc.VCPU, 24 * 16.0, 0), (orc.MEMORY_MB, 32768 * 1.5, 0), (orc.DISK_GB, 2000 - 100, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) # Next forbid SSE2 in the request so the trait query becomes # (AVX2 and (SSE or SSE2) and !SSE2) this should lead to no candidate # as cn2 has SSE2 alloc_cands = self._get_allocation_candidates( {'': 
placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=required_traits, forbidden_traits={'HW_CPU_X86_SSE2'}, )}, ) self._validate_allocation_requests([], alloc_cands) # But if we forbid SSE instead of SSE2 then we get back cn2 alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=required_traits, forbidden_traits={'HW_CPU_X86_SSE'} )}, ) expected = [ [('cn2', orc.VCPU, 1), ('cn2', orc.MEMORY_MB, 64), ('cn2', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) p_sums = alloc_cands.provider_summaries self.assertEqual(1, len(p_sums)) expected = { 'cn2': set([ (orc.VCPU, 24 * 16.0, 0), (orc.MEMORY_MB, 32768 * 1.5, 0), (orc.DISK_GB, 2000 - 100, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) def test_all_local_limit(self): """Create some resource providers that can satisfy the request for resources with local (non-shared) resources, limit them, and verify that the allocation requests returned by AllocationCandidates correspond with each of these resource providers. """ # Create three compute node providers with VCPU, RAM and local disk for name in ('cn1', 'cn2', 'cn3'): cn = self._create_provider(name) tb.add_inventory(cn, orc.VCPU, 24, allocation_ratio=16.0) tb.add_inventory(cn, orc.MEMORY_MB, 32768, min_unit=64, step_size=64, allocation_ratio=1.5) total_gb = 1000 if name == 'cn3' else 2000 tb.add_inventory(cn, orc.DISK_GB, total_gb, reserved=100, min_unit=10, step_size=10, allocation_ratio=1.0) # Ask for just one candidate. limit = 1 alloc_cands = self._get_allocation_candidates( rqparams=placement_lib.RequestWideParams(limit=limit)) allocation_requests = alloc_cands.allocation_requests self.assertEqual(limit, len(allocation_requests)) # provider summaries should have only one rp self.assertEqual(limit, len(alloc_cands.provider_summaries)) # Do it again, with conf set to randomize. We can't confirm the # random-ness but we can be sure the code path doesn't explode. self.conf_fixture.config(randomize_allocation_candidates=True, group='placement') # Ask for two candidates. limit = 2 alloc_cands = self._get_allocation_candidates( rqparams=placement_lib.RequestWideParams(limit=limit)) allocation_requests = alloc_cands.allocation_requests self.assertEqual(limit, len(allocation_requests)) # provider summaries should have two rps self.assertEqual(limit, len(alloc_cands.provider_summaries)) # Do it again, asking for more than are available. limit = 5 # We still only expect 2 because cn3 does not match default requests. expected_length = 2 alloc_cands = self._get_allocation_candidates( rqparams=placement_lib.RequestWideParams(limit=limit)) allocation_requests = alloc_cands.allocation_requests self.assertEqual(expected_length, len(allocation_requests)) # provider summaries should have two rps self.assertEqual(expected_length, len(alloc_cands.provider_summaries)) def test_local_with_shared_disk(self): """Create some resource providers that can satisfy the request for resources with local VCPU and MEMORY_MB but rely on a shared storage pool to satisfy DISK_GB and verify that the allocation requests returned by AllocationCandidates have DISK_GB served up by the shared storage pool resource provider and VCPU/MEMORY_MB by the compute node providers """ # Create two compute node providers with VCPU, RAM and NO local disk, # associated with the aggregate. 
cn1, cn2 = (self._create_provider(name, uuids.agg) for name in ('cn1', 'cn2')) for cn in (cn1, cn2): tb.add_inventory(cn, orc.VCPU, 24, allocation_ratio=16.0) tb.add_inventory(cn, orc.MEMORY_MB, 1024, min_unit=64, allocation_ratio=1.5) # Create the shared storage pool, associated with the same aggregate ss = self._create_provider('shared storage', uuids.agg) # Give the shared storage pool some inventory of DISK_GB tb.add_inventory(ss, orc.DISK_GB, 2000, reserved=100, min_unit=10) # Mark the shared storage pool as having inventory shared among any # provider associated via aggregate tb.set_traits(ss, "MISC_SHARES_VIA_AGGREGATE") # Ask for the alternative placement possibilities and verify each # compute node provider is listed in the allocation requests as well as # the shared storage pool provider alloc_cands = self._get_allocation_candidates() # Verify the provider summary information indicates 0 usage and # capacity calculated from above inventory numbers for both compute # nodes and the shared provider. expected = { 'cn1': set([ (orc.VCPU, 24 * 16.0, 0), (orc.MEMORY_MB, 1024 * 1.5, 0), ]), 'cn2': set([ (orc.VCPU, 24 * 16.0, 0), (orc.MEMORY_MB, 1024 * 1.5, 0), ]), 'shared storage': set([ (orc.DISK_GB, 2000 - 100, 0) ]), } self._validate_provider_summary_resources(expected, alloc_cands) # Verify the allocation requests that are returned. There should be 2 # allocation requests, one for each compute node, containing 3 # resources in each allocation request, one each for VCPU, RAM, and # disk. The amounts of the requests should correspond to the requested # resource amounts in the filter:resources dict passed to # AllocationCandidates.get_by_requests(). The providers for VCPU and # MEMORY_MB should be the compute nodes while the provider for the # DISK_GB should be the shared storage pool expected = [ [('cn1', orc.VCPU, 1), ('cn1', orc.MEMORY_MB, 64), ('shared storage', orc.DISK_GB, 1500)], [('cn2', orc.VCPU, 1), ('cn2', orc.MEMORY_MB, 64), ('shared storage', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) # Test for bug #1705071. We query for allocation candidates with a # request for ONLY the DISK_GB (the resource that is shared with # compute nodes) and no VCPU/MEMORY_MB. Before the fix for bug # #1705071, this resulted in a KeyError alloc_cands = self._get_allocation_candidates( groups={'': placement_lib.RequestGroup( use_same_provider=False, resources={ 'DISK_GB': 10, } )} ) # We should only have provider summary information for the sharing # storage provider, since that's the only provider that can be # allocated against for this request. In the future, we may look into # returning the shared-with providers in the provider summaries, but # that's a distant possibility. expected = { 'shared storage': set([ (orc.DISK_GB, 2000 - 100, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) # The allocation_requests will only include the shared storage # provider because the only thing we're requesting to allocate is # against the provider of DISK_GB, which happens to be the shared # storage provider. expected = [[('shared storage', orc.DISK_GB, 10)]] self._validate_allocation_requests(expected, alloc_cands) # Now we're going to add a set of required traits into the request mix. 
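# NOTE: illustrative aside added for clarity; not part of the original test
# code. The required_traits structure passed to these RequestGroup objects
# is the internal list-of-sets form of the API's ``required`` query
# parameter: the outer list is ANDed and each inner set is ORed. Assuming
# the microversion 1.39 "any-traits" syntax, a value such as
#   required_traits=[{os_traits.HW_CPU_X86_AVX2},
#                    {'HW_CPU_X86_SSE', 'HW_CPU_X86_SSE2'}]
# roughly corresponds to the query string
#   ?required=HW_CPU_X86_AVX2&required=in:HW_CPU_X86_SSE,HW_CPU_X86_SSE2
# i.e. "AVX2 and (SSE or SSE2)".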
# To start off, let's request a required trait that we know has not # been associated yet with any provider, and ensure we get no results alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=[{os_traits.HW_CPU_X86_AVX2}], )} ) # We have not yet associated the AVX2 trait to any provider, so we # should get zero allocation candidates p_sums = alloc_cands.provider_summaries self.assertEqual(0, len(p_sums)) # Now, if we then associate the required trait with both of our compute # nodes, we should get back both compute nodes since they both now # satisfy the required traits as well as the resource request avx2_t = trait_obj.Trait.get_by_name( self.ctx, os_traits.HW_CPU_X86_AVX2) cn1.set_traits([avx2_t]) cn2.set_traits([avx2_t]) alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=[{os_traits.HW_CPU_X86_AVX2}], )} ) # There should be 2 compute node providers and 1 shared storage # provider in the summaries. expected = { 'cn1': set([ (orc.VCPU, 24 * 16.0, 0), (orc.MEMORY_MB, 1024 * 1.5, 0), ]), 'cn2': set([ (orc.VCPU, 24 * 16.0, 0), (orc.MEMORY_MB, 1024 * 1.5, 0), ]), 'shared storage': set([ (orc.DISK_GB, 2000 - 100, 0) ]), } self._validate_provider_summary_resources(expected, alloc_cands) # Let's check that the traits listed for the compute nodes include the # AVX2 trait, and the shared storage provider in the provider summaries # does NOT have the AVX2 trait. expected = { 'cn1': set(['HW_CPU_X86_AVX2']), 'cn2': set(['HW_CPU_X86_AVX2']), 'shared storage': set(['MISC_SHARES_VIA_AGGREGATE']), } self._validate_provider_summary_traits(expected, alloc_cands) # Forbid the AVX2 trait alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, forbidden_traits=set([os_traits.HW_CPU_X86_AVX2]), )} ) # Should be no results as both cn1 and cn2 have the trait. expected = [] self._validate_allocation_requests(expected, alloc_cands) # Require the AVX2 trait but forbid CUSTOM_EXTRA_FASTER, which is # added to cn2 tb.set_traits(cn2, 'HW_CPU_X86_AVX2', 'CUSTOM_EXTRA_FASTER') alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=[{os_traits.HW_CPU_X86_AVX2}], forbidden_traits=set(['CUSTOM_EXTRA_FASTER']), )} ) expected = [ [('cn1', orc.VCPU, 1), ('cn1', orc.MEMORY_MB, 64), ('shared storage', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) # Add disk to cn1, forbid sharing, and require the AVX2 trait. # This should result in getting only cn1. tb.add_inventory(cn1, orc.DISK_GB, 2048, allocation_ratio=1.5) alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=[{os_traits.HW_CPU_X86_AVX2}], forbidden_traits=set(['MISC_SHARES_VIA_AGGREGATE']), )} ) expected = [ [('cn1', orc.VCPU, 1), ('cn1', orc.MEMORY_MB, 64), ('cn1', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) # Now create a more complex trait query. 
(AVX2 and (SSE or SSE2)) # Right now none of the RPs has SSE or SSE2 so we expect no candidates required_traits = [ {os_traits.HW_CPU_X86_AVX2}, {os_traits.HW_CPU_X86_SSE, os_traits.HW_CPU_X86_SSE2} ] alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=required_traits, )} ) # We have not yet associated the SSE or SSE2 traits to any provider, # so we should get zero allocation candidates p_sums = alloc_cands.provider_summaries self.assertEqual([], alloc_cands.allocation_requests) self.assertEqual(0, len(p_sums)) # Next associate SSE with the sharing provider, which is enough to get # matches. cn1 with shared storage is a match as ss provides SSE but # cn1 with local disk is not a match as then ss is not used and # therefore no SSE is provided. cn2 is a match with ss. tb.set_traits(ss, "MISC_SHARES_VIA_AGGREGATE", "HW_CPU_X86_SSE") alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=required_traits, )} ) expected = [ [('cn1', orc.VCPU, 1), ('cn1', orc.MEMORY_MB, 64), ('shared storage', orc.DISK_GB, 1500)], [('cn2', orc.VCPU, 1), ('cn2', orc.MEMORY_MB, 64), ('shared storage', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) # Now add SSE2 to cn1 so cn1 + local disk will also be a match tb.set_traits(cn1, "HW_CPU_X86_AVX2", "HW_CPU_X86_SSE2") alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=required_traits, )} ) expected = [ [('cn1', orc.VCPU, 1), ('cn1', orc.MEMORY_MB, 64), ('shared storage', orc.DISK_GB, 1500)], [('cn1', orc.VCPU, 1), ('cn1', orc.MEMORY_MB, 64), ('cn1', orc.DISK_GB, 1500)], [('cn2', orc.VCPU, 1), ('cn2', orc.MEMORY_MB, 64), ('shared storage', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) # Now change the trait query to # (AVX2 and (SSE or SSE2) and not CUSTOM_EXTRA_FASTER) # cn2 has the CUSTOM_EXTRA_FASTER trait so that is expected to be # filtered out alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=required_traits, forbidden_traits={'CUSTOM_EXTRA_FASTER'}, )} ) expected = [ [('cn1', orc.VCPU, 1), ('cn1', orc.MEMORY_MB, 64), ('shared storage', orc.DISK_GB, 1500)], [('cn1', orc.VCPU, 1), ('cn1', orc.MEMORY_MB, 64), ('cn1', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) def test_local_with_shared_custom_resource(self): """Create some resource providers that can satisfy the request for resources with local VCPU and MEMORY_MB but rely on a shared resource provider to satisfy a custom resource requirement and verify that the allocation requests returned by AllocationCandidates have the custom resource served up by the shared custom resource provider and VCPU/MEMORY_MB by the compute node providers """ # The aggregate that will be associated to everything... agg_uuid = uuids.agg # Create two compute node providers with VCPU, RAM and NO local # CUSTOM_MAGIC resources, associated with the aggregate.
for name in ('cn1', 'cn2'): cn = self._create_provider(name, agg_uuid) tb.add_inventory(cn, orc.VCPU, 24, allocation_ratio=16.0) tb.add_inventory(cn, orc.MEMORY_MB, 1024, min_unit=64, allocation_ratio=1.5) # Create a custom resource called MAGIC magic_rc = rc_obj.ResourceClass( self.ctx, name='CUSTOM_MAGIC', ) magic_rc.create() # Create the shared provider that serves CUSTOM_MAGIC, associated with # the same aggregate magic_p = self._create_provider('shared custom resource provider', agg_uuid) tb.add_inventory(magic_p, magic_rc.name, 2048, reserved=1024, min_unit=10) # Mark the magic provider as having inventory shared among any provider # associated via aggregate tb.set_traits(magic_p, "MISC_SHARES_VIA_AGGREGATE") # The resources we will request requested_resources = { orc.VCPU: 1, orc.MEMORY_MB: 64, magic_rc.name: 512, } alloc_cands = self._get_allocation_candidates( groups={'': placement_lib.RequestGroup( use_same_provider=False, resources=requested_resources)}) # Verify the allocation requests that are returned. There should be 2 # allocation requests, one for each compute node, containing 3 # resources in each allocation request, one each for VCPU, RAM, and # MAGIC. The amounts of the requests should correspond to the requested # resource amounts in the filter:resources dict passed to # AllocationCandidates.get_by_requests(). The providers for VCPU and # MEMORY_MB should be the compute nodes while the provider for the # MAGIC should be the shared custom resource provider. expected = [ [('cn1', orc.VCPU, 1), ('cn1', orc.MEMORY_MB, 64), ('shared custom resource provider', magic_rc.name, 512)], [('cn2', orc.VCPU, 1), ('cn2', orc.MEMORY_MB, 64), ('shared custom resource provider', magic_rc.name, 512)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'cn1': set([ (orc.VCPU, 24 * 16.0, 0), (orc.MEMORY_MB, 1024 * 1.5, 0), ]), 'cn2': set([ (orc.VCPU, 24 * 16.0, 0), (orc.MEMORY_MB, 1024 * 1.5, 0), ]), 'shared custom resource provider': set([ (magic_rc.name, 1024, 0) ]), } self._validate_provider_summary_resources(expected, alloc_cands) def test_mix_local_and_shared(self): # Create three compute node providers with VCPU and RAM, but only # the third compute node has DISK. The first two computes will # share the storage from the shared storage pool. 
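# NOTE: rough topology sketch added for clarity (not part of the original
# comment), mirroring the setup code below where uuids.agg is the only
# aggregate involved:
#
#   cn1 (VCPU, MEMORY_MB) --+
#                           +-- agg -- shared storage (DISK_GB,
#   cn2 (VCPU, MEMORY_MB) --+                          MISC_SHARES_VIA_AGGREGATE)
#
#   cn3 (VCPU, MEMORY_MB, DISK_GB)  <- local disk, not in the aggregate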
cn1, cn2 = (self._create_provider(name, uuids.agg) for name in ('cn1', 'cn2')) # cn3 is not associated with the aggregate cn3 = self._create_provider('cn3') for cn in (cn1, cn2, cn3): tb.add_inventory(cn, orc.VCPU, 24, allocation_ratio=16.0) tb.add_inventory(cn, orc.MEMORY_MB, 1024, min_unit=64, allocation_ratio=1.5) # Only cn3 has disk tb.add_inventory(cn3, orc.DISK_GB, 2000, reserved=100, min_unit=10) # Create the shared storage pool in the same aggregate as the first two # compute nodes ss = self._create_provider('shared storage', uuids.agg) # Give the shared storage pool some inventory of DISK_GB tb.add_inventory(ss, orc.DISK_GB, 2000, reserved=100, min_unit=10) tb.set_traits(ss, "MISC_SHARES_VIA_AGGREGATE") alloc_cands = self._get_allocation_candidates() # Expect cn1, cn2, cn3 and ss in the summaries expected = { 'cn1': set([ (orc.VCPU, 24 * 16.0, 0), (orc.MEMORY_MB, 1024 * 1.5, 0), ]), 'cn2': set([ (orc.VCPU, 24 * 16.0, 0), (orc.MEMORY_MB, 1024 * 1.5, 0), ]), 'cn3': set([ (orc.VCPU, 24 * 16.0, 0), (orc.MEMORY_MB, 1024 * 1.5, 0), (orc.DISK_GB, 2000 - 100, 0), ]), 'shared storage': set([ (orc.DISK_GB, 2000 - 100, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) # Expect three allocation requests: (cn1, ss), (cn2, ss), (cn3) expected = [ [('cn1', orc.VCPU, 1), ('cn1', orc.MEMORY_MB, 64), ('shared storage', orc.DISK_GB, 1500)], [('cn2', orc.VCPU, 1), ('cn2', orc.MEMORY_MB, 64), ('shared storage', orc.DISK_GB, 1500)], [('cn3', orc.VCPU, 1), ('cn3', orc.MEMORY_MB, 64), ('cn3', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) # Now we're going to add a set of required traits into the request mix. # To start off, let's request a required trait that we know has not # been associated yet with any provider, and ensure we get no results alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=[{os_traits.HW_CPU_X86_AVX2}], )} ) # We have not yet associated the AVX2 trait to any provider, so we # should get zero allocation candidates p_sums = alloc_cands.provider_summaries self.assertEqual(0, len(p_sums)) a_reqs = alloc_cands.allocation_requests self.assertEqual(0, len(a_reqs)) # Now, if we then associate the required trait with all of our compute # nodes, we should get back all compute nodes since they all now # satisfy the required traits as well as the resource request for cn in (cn1, cn2, cn3): tb.set_traits(cn, os_traits.HW_CPU_X86_AVX2) alloc_cands = self._get_allocation_candidates( groups={'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=[{os_traits.HW_CPU_X86_AVX2}], )} ) # There should be 3 compute node providers and 1 shared storage # provider in the summaries. 
expected = { 'cn1': set([ (orc.VCPU, 24 * 16.0, 0), (orc.MEMORY_MB, 1024 * 1.5, 0), ]), 'cn2': set([ (orc.VCPU, 24 * 16.0, 0), (orc.MEMORY_MB, 1024 * 1.5, 0), ]), 'cn3': set([ (orc.VCPU, 24 * 16.0, 0), (orc.MEMORY_MB, 1024 * 1.5, 0), (orc.DISK_GB, 2000 - 100, 0), ]), 'shared storage': set([ (orc.DISK_GB, 2000 - 100, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) # Let's check that the traits listed for the compute nodes include the # AVX2 trait, and the shared storage provider in the provider summaries # does NOT have the AVX2 trait expected = { 'cn1': set(['HW_CPU_X86_AVX2']), 'cn2': set(['HW_CPU_X86_AVX2']), 'cn3': set(['HW_CPU_X86_AVX2']), 'shared storage': set(['MISC_SHARES_VIA_AGGREGATE']), } self._validate_provider_summary_traits(expected, alloc_cands) # Now, let's add a new wrinkle to the equation and add a required trait # that will ONLY be satisfied by a compute node with local disk that # has SSD drives. Set this trait only on the compute node with local # disk (cn3) tb.set_traits(cn3, os_traits.HW_CPU_X86_AVX2, os_traits.STORAGE_DISK_SSD) alloc_cands = self._get_allocation_candidates({ '': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=[ {os_traits.HW_CPU_X86_AVX2}, {os_traits.STORAGE_DISK_SSD} ], ) }) # There should be only cn3 in the returned allocation candidates expected = [ [('cn3', orc.VCPU, 1), ('cn3', orc.MEMORY_MB, 64), ('cn3', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'cn3': set([ (orc.VCPU, 24 * 16.0, 0), (orc.MEMORY_MB, 1024 * 1.5, 0), (orc.DISK_GB, 2000 - 100, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) expected = { 'cn3': set(['HW_CPU_X86_AVX2', 'STORAGE_DISK_SSD']) } self._validate_provider_summary_traits(expected, alloc_cands) # Let's have an even more complex trait query # (AVX2 and (SSD or SSE) and not SSE2). As no SSE or SSE2 is in the # current trees we still get back cn3, which has AVX2 and SSD required_traits = [ {os_traits.HW_CPU_X86_AVX2}, {os_traits.STORAGE_DISK_SSD, os_traits.HW_CPU_X86_SSE} ] alloc_cands = self._get_allocation_candidates({ '': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=required_traits, forbidden_traits={os_traits.HW_CPU_X86_SSE2} ) }) # There should be only cn3 in the returned allocation candidates expected = [ [('cn3', orc.VCPU, 1), ('cn3', orc.MEMORY_MB, 64), ('cn3', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'cn3': set([ (orc.VCPU, 24 * 16.0, 0), (orc.MEMORY_MB, 1024 * 1.5, 0), (orc.DISK_GB, 2000 - 100, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) expected = { 'cn3': set(['HW_CPU_X86_AVX2', 'STORAGE_DISK_SSD']) } self._validate_provider_summary_traits(expected, alloc_cands) # Next we add SSE to cn1 and both SSE and SSE2 to cn2. This will make # cn1 a match while cn2 will still be ignored due to SSE2.
cn3 is good as # before tb.set_traits( cn1, os_traits.HW_CPU_X86_AVX2, os_traits.HW_CPU_X86_SSE) tb.set_traits( cn2, os_traits.HW_CPU_X86_AVX2, os_traits.HW_CPU_X86_SSE, os_traits.HW_CPU_X86_SSE2 ) alloc_cands = self._get_allocation_candidates({ '': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=required_traits, forbidden_traits={os_traits.HW_CPU_X86_SSE2} ) }) expected = [ [('cn1', orc.VCPU, 1), ('cn1', orc.MEMORY_MB, 64), ('shared storage', orc.DISK_GB, 1500)], [('cn3', orc.VCPU, 1), ('cn3', orc.MEMORY_MB, 64), ('cn3', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) def test_common_rc(self): """Candidates when cn and shared have inventory in the same class.""" cn = self._create_provider('cn', uuids.agg1) tb.add_inventory(cn, orc.VCPU, 24) tb.add_inventory(cn, orc.MEMORY_MB, 2048) tb.add_inventory(cn, orc.DISK_GB, 1600) ss = self._create_provider('ss', uuids.agg1) tb.set_traits(ss, "MISC_SHARES_VIA_AGGREGATE") tb.add_inventory(ss, orc.DISK_GB, 2000) alloc_cands = self._get_allocation_candidates() # One allocation_request should have cn + ss; the other should have # just the cn. expected = [ [('cn', orc.VCPU, 1), ('cn', orc.MEMORY_MB, 64), ('cn', orc.DISK_GB, 1500)], [('cn', orc.VCPU, 1), ('cn', orc.MEMORY_MB, 64), ('ss', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'cn': set([ (orc.VCPU, 24, 0), (orc.MEMORY_MB, 2048, 0), (orc.DISK_GB, 1600, 0), ]), 'ss': set([ (orc.DISK_GB, 2000, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) # Next let's increase the requested DISK_GB requested_resources = { orc.VCPU: 1, orc.MEMORY_MB: 64, orc.DISK_GB: 1800, } alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources=requested_resources, )} ) expected = [ [('cn', orc.VCPU, 1), ('cn', orc.MEMORY_MB, 64), ('ss', orc.DISK_GB, 1800)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'cn': set([ (orc.VCPU, 24, 0), (orc.MEMORY_MB, 2048, 0), (orc.DISK_GB, 1600, 0), ]), 'ss': set([ (orc.DISK_GB, 2000, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) def test_common_rc_traits_split(self): """Validate filters when traits are split across cn and shared RPs.""" # NOTE(efried): This test case only applies to the scenario where we're # requesting resources via the RequestGroup where # use_same_provider=False cn = self._create_provider('cn', uuids.agg1) tb.add_inventory(cn, orc.VCPU, 24) tb.add_inventory(cn, orc.MEMORY_MB, 2048) tb.add_inventory(cn, orc.DISK_GB, 1600) # The compute node's disk is SSD tb.set_traits(cn, 'HW_CPU_X86_SSE', 'STORAGE_DISK_SSD') ss = self._create_provider('ss', uuids.agg1) tb.add_inventory(ss, orc.DISK_GB, 1600) # The shared storage's disk is RAID tb.set_traits(ss, 'MISC_SHARES_VIA_AGGREGATE', 'CUSTOM_RAID') alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources=self.requested_resources, required_traits=[ {'HW_CPU_X86_SSE'}, {'STORAGE_DISK_SSD'}, {'CUSTOM_RAID'}] )} ) # TODO(efried): Bug #1724633: we'd *like* to get no candidates, because # there's no single DISK_GB resource with both STORAGE_DISK_SSD and # CUSTOM_RAID traits. 
# expected = [] expected = [ [('cn', orc.VCPU, 1), ('cn', orc.MEMORY_MB, 64), ('ss', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) # expected = {} expected = { 'cn': set([ (orc.VCPU, 24, 0), (orc.MEMORY_MB, 2048, 0), (orc.DISK_GB, 1600, 0), ]), 'ss': set([ (orc.DISK_GB, 1600, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) def test_only_one_sharing_provider(self): ss1 = self._create_provider('ss1', uuids.agg1) tb.set_traits(ss1, "MISC_SHARES_VIA_AGGREGATE") tb.add_inventory(ss1, orc.IPV4_ADDRESS, 24) tb.add_inventory(ss1, orc.SRIOV_NET_VF, 16) tb.add_inventory(ss1, orc.DISK_GB, 1600) alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources={ 'IPV4_ADDRESS': 2, 'SRIOV_NET_VF': 1, 'DISK_GB': 1500, } )} ) expected = [ [('ss1', orc.IPV4_ADDRESS, 2), ('ss1', orc.SRIOV_NET_VF, 1), ('ss1', orc.DISK_GB, 1500)] ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'ss1': set([ (orc.IPV4_ADDRESS, 24, 0), (orc.SRIOV_NET_VF, 16, 0), (orc.DISK_GB, 1600, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) def test_all_sharing_providers_no_rc_overlap(self): ss1 = self._create_provider('ss1', uuids.agg1) tb.set_traits(ss1, "MISC_SHARES_VIA_AGGREGATE") tb.add_inventory(ss1, orc.IPV4_ADDRESS, 24) ss2 = self._create_provider('ss2', uuids.agg1) tb.set_traits(ss2, "MISC_SHARES_VIA_AGGREGATE") tb.add_inventory(ss2, orc.DISK_GB, 1600) alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources={ 'IPV4_ADDRESS': 2, 'DISK_GB': 1500, } )} ) expected = [ [('ss1', orc.IPV4_ADDRESS, 2), ('ss2', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'ss1': set([ (orc.IPV4_ADDRESS, 24, 0), ]), 'ss2': set([ (orc.DISK_GB, 1600, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) def test_all_sharing_providers_no_rc_overlap_more_classes(self): ss1 = self._create_provider('ss1', uuids.agg1) tb.set_traits(ss1, "MISC_SHARES_VIA_AGGREGATE") tb.add_inventory(ss1, orc.IPV4_ADDRESS, 24) tb.add_inventory(ss1, orc.SRIOV_NET_VF, 16) ss2 = self._create_provider('ss2', uuids.agg1) tb.set_traits(ss2, "MISC_SHARES_VIA_AGGREGATE") tb.add_inventory(ss2, orc.DISK_GB, 1600) alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources={ 'IPV4_ADDRESS': 2, 'SRIOV_NET_VF': 1, 'DISK_GB': 1500, } )} ) expected = [ [('ss1', orc.IPV4_ADDRESS, 2), ('ss1', orc.SRIOV_NET_VF, 1), ('ss2', orc.DISK_GB, 1500)] ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'ss1': set([ (orc.IPV4_ADDRESS, 24, 0), (orc.SRIOV_NET_VF, 16, 0) ]), 'ss2': set([ (orc.DISK_GB, 1600, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) def test_all_sharing_providers(self): ss1 = self._create_provider('ss1', uuids.agg1) tb.set_traits(ss1, "MISC_SHARES_VIA_AGGREGATE") tb.add_inventory(ss1, orc.IPV4_ADDRESS, 24) tb.add_inventory(ss1, orc.SRIOV_NET_VF, 16) tb.add_inventory(ss1, orc.DISK_GB, 1600) ss2 = self._create_provider('ss2', uuids.agg1) tb.set_traits(ss2, "MISC_SHARES_VIA_AGGREGATE") tb.add_inventory(ss2, orc.SRIOV_NET_VF, 16) tb.add_inventory(ss2, orc.DISK_GB, 1600) alloc_cands = self._get_allocation_candidates(groups={ '': placement_lib.RequestGroup( use_same_provider=False, resources={ 'IPV4_ADDRESS': 2, 'SRIOV_NET_VF': 1, 'DISK_GB': 1500, } )} ) # We expect four candidates: # - gets all the resources 
from ss1, # - gets the SRIOV_NET_VF from ss2 and the rest from ss1, # - gets the DISK_GB from ss2 and the rest from ss1, # - gets SRIOV_NET_VF and DISK_GB from ss2 and rest from ss1 expected = [ [('ss1', orc.IPV4_ADDRESS, 2), ('ss1', orc.SRIOV_NET_VF, 1), ('ss1', orc.DISK_GB, 1500)], [('ss1', orc.IPV4_ADDRESS, 2), ('ss1', orc.SRIOV_NET_VF, 1), ('ss2', orc.DISK_GB, 1500)], [('ss1', orc.IPV4_ADDRESS, 2), ('ss2', orc.SRIOV_NET_VF, 1), ('ss1', orc.DISK_GB, 1500)], [('ss1', orc.IPV4_ADDRESS, 2), ('ss2', orc.SRIOV_NET_VF, 1), ('ss2', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'ss1': set([ (orc.IPV4_ADDRESS, 24, 0), (orc.SRIOV_NET_VF, 16, 0), (orc.DISK_GB, 1600, 0) ]), 'ss2': set([ (orc.SRIOV_NET_VF, 16, 0), (orc.DISK_GB, 1600, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) def test_two_non_sharing_connect_to_one_sharing_different_aggregate(self): # Covering the following setup: # # CN1 (VCPU) CN2 (VCPU) # \ agg1 / agg2 # SS1 (DISK_GB) # # It is different from test_mix_local_and_shared as it uses two # different aggregates to connect the two CNs to the share RP cn1 = self._create_provider('cn1', uuids.agg1) tb.add_inventory(cn1, orc.VCPU, 24) tb.add_inventory(cn1, orc.MEMORY_MB, 2048) cn2 = self._create_provider('cn2', uuids.agg2) tb.add_inventory(cn2, orc.VCPU, 24) tb.add_inventory(cn2, orc.MEMORY_MB, 2048) ss1 = self._create_provider('ss1', uuids.agg1, uuids.agg2) tb.set_traits(ss1, "MISC_SHARES_VIA_AGGREGATE") tb.add_inventory(ss1, orc.DISK_GB, 1600) alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources={ 'VCPU': 2, 'DISK_GB': 1500, } )} ) expected = [ [('cn1', orc.VCPU, 2), ('ss1', orc.DISK_GB, 1500)], [('cn2', orc.VCPU, 2), ('ss1', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'cn1': set([ (orc.VCPU, 24, 0), (orc.MEMORY_MB, 2048, 0), ]), 'cn2': set([ (orc.VCPU, 24, 0), (orc.MEMORY_MB, 2048, 0), ]), 'ss1': set([ (orc.DISK_GB, 1600, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) def test_two_non_sharing_one_common_and_two_unique_sharing(self): # Covering the following setup: # # CN1 (VCPU) CN2 (VCPU) # / agg3 \ agg1 / agg1 \ agg2 # SS3 (IPV4) SS1 (DISK_GB) SS2 (IPV4) cn1 = self._create_provider('cn1', uuids.agg1, uuids.agg3) tb.add_inventory(cn1, orc.VCPU, 24) tb.add_inventory(cn1, orc.MEMORY_MB, 2048) cn2 = self._create_provider('cn2', uuids.agg1, uuids.agg2) tb.add_inventory(cn2, orc.VCPU, 24) tb.add_inventory(cn2, orc.MEMORY_MB, 2048) # ss1 is connected to both cn1 and cn2 ss1 = self._create_provider('ss1', uuids.agg1) tb.set_traits(ss1, "MISC_SHARES_VIA_AGGREGATE") tb.add_inventory(ss1, orc.DISK_GB, 1600) # ss2 only connected to cn2 ss2 = self._create_provider('ss2', uuids.agg2) tb.set_traits(ss2, "MISC_SHARES_VIA_AGGREGATE") tb.add_inventory(ss2, orc.IPV4_ADDRESS, 24) # ss3 only connected to cn1 ss3 = self._create_provider('ss3', uuids.agg3) tb.set_traits(ss3, "MISC_SHARES_VIA_AGGREGATE") tb.add_inventory(ss3, orc.IPV4_ADDRESS, 24) alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources={ 'VCPU': 2, 'DISK_GB': 1500, 'IPV4_ADDRESS': 2, } )} ) expected = [ [('cn1', orc.VCPU, 2), ('ss1', orc.DISK_GB, 1500), ('ss3', orc.IPV4_ADDRESS, 2)], [('cn2', orc.VCPU, 2), ('ss1', orc.DISK_GB, 1500), ('ss2', orc.IPV4_ADDRESS, 2)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'cn1': set([ (orc.VCPU, 24, 0), 
(orc.MEMORY_MB, 2048, 0), ]), 'cn2': set([ (orc.VCPU, 24, 0), (orc.MEMORY_MB, 2048, 0), ]), 'ss1': set([ (orc.DISK_GB, 1600, 0), ]), 'ss2': set([ (orc.IPV4_ADDRESS, 24, 0), ]), 'ss3': set([ (orc.IPV4_ADDRESS, 24, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) def test_rc_not_split_between_sharing_and_non_sharing(self): # cn1(VCPU,MEM) Non-sharing RP with some of the resources # | agg1 aggregated with # ss1(DISK) sharing RP that has the rest of the resources # # cn2(VCPU) Non-sharing with one of the resources; # / agg2 \ aggregated with multiple sharing providers # ss2_1(MEM) ss2_2(DISK) with different resources. cn1 = self._create_provider('cn1', uuids.agg1) tb.add_inventory(cn1, orc.VCPU, 24) tb.add_inventory(cn1, orc.MEMORY_MB, 2048) ss1 = self._create_provider('ss1', uuids.agg1) tb.add_inventory(ss1, orc.DISK_GB, 2000) tb.set_traits(ss1, 'MISC_SHARES_VIA_AGGREGATE') cn2 = self._create_provider('cn2', uuids.agg2) tb.add_inventory(cn2, orc.VCPU, 24) ss2_1 = self._create_provider('ss2_1', uuids.agg2) tb.add_inventory(ss2_1, orc.MEMORY_MB, 2048) tb.set_traits(ss2_1, 'MISC_SHARES_VIA_AGGREGATE') ss2_2 = self._create_provider('ss2_2', uuids.agg2) tb.add_inventory(ss2_2, orc.DISK_GB, 2000) tb.set_traits(ss2_2, 'MISC_SHARES_VIA_AGGREGATE') alloc_cands = self._get_allocation_candidates() expected = [ [('cn1', orc.VCPU, 1), ('cn1', orc.MEMORY_MB, 64), ('ss1', orc.DISK_GB, 1500)], [('cn2', orc.VCPU, 1), ('ss2_1', orc.MEMORY_MB, 64), ('ss2_2', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'cn1': set([ (orc.VCPU, 24, 0), (orc.MEMORY_MB, 2048, 0), ]), 'ss1': set([ (orc.DISK_GB, 2000, 0), ]), 'cn2': set([ (orc.VCPU, 24, 0), ]), 'ss2_1': set([ (orc.MEMORY_MB, 2048, 0), ]), 'ss2_2': set([ (orc.DISK_GB, 2000, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) def test_multiple_sharing_providers_with_same_rc(self): # cn1(VCPU,MEM) Non-sharing with some of the resources; # / agg1 \ aggregated with multiple sharing providers # ss1_1(DISK) ss1_2(DISK) with the same resource. # # cn2(VCPU) Non-sharing with one of the resources; # / agg2 \ aggregated with multiple sharing providers # ss2_1(MEM) ss2_2(DISK) with different resources. 
cn1 = self._create_provider('cn1', uuids.agg1) tb.add_inventory(cn1, orc.VCPU, 24) tb.add_inventory(cn1, orc.MEMORY_MB, 2048) ss1_1 = self._create_provider('ss1_1', uuids.agg1) tb.add_inventory(ss1_1, orc.DISK_GB, 2000) tb.set_traits(ss1_1, 'MISC_SHARES_VIA_AGGREGATE') ss1_2 = self._create_provider('ss1_2', uuids.agg1) tb.add_inventory(ss1_2, orc.DISK_GB, 2000) tb.set_traits(ss1_2, 'MISC_SHARES_VIA_AGGREGATE') cn2 = self._create_provider('cn2', uuids.agg2) tb.add_inventory(cn2, orc.VCPU, 24) ss2_1 = self._create_provider('ss2_1', uuids.agg2) tb.add_inventory(ss2_1, orc.MEMORY_MB, 2048) tb.set_traits(ss2_1, 'MISC_SHARES_VIA_AGGREGATE') ss2_2 = self._create_provider('ss2_2', uuids.agg2) tb.add_inventory(ss2_2, orc.DISK_GB, 2000) tb.set_traits(ss2_2, 'MISC_SHARES_VIA_AGGREGATE') alloc_cands = self._get_allocation_candidates() expected = [ [('cn1', orc.VCPU, 1), ('cn1', orc.MEMORY_MB, 64), ('ss1_1', orc.DISK_GB, 1500)], [('cn1', orc.VCPU, 1), ('cn1', orc.MEMORY_MB, 64), ('ss1_2', orc.DISK_GB, 1500)], [('cn2', orc.VCPU, 1), ('ss2_1', orc.MEMORY_MB, 64), ('ss2_2', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'cn1': set([ (orc.VCPU, 24, 0), (orc.MEMORY_MB, 2048, 0), ]), 'ss1_1': set([ (orc.DISK_GB, 2000, 0), ]), 'ss1_2': set([ (orc.DISK_GB, 2000, 0), ]), 'cn2': set([ (orc.VCPU, 24, 0), ]), 'ss2_1': set([ (orc.MEMORY_MB, 2048, 0), ]), 'ss2_2': set([ (orc.DISK_GB, 2000, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) def test_sharing_providers_member_of(self): # Covering the following setup: # # CN1 (VCPU, DISK_GB) CN2 (VCPU, DISK_GB) # / agg1 \ agg2 / agg2 \ agg3 # SS1 (DISK_GB) SS2 (DISK_GB) SS3 (DISK_GB) cn1 = self._create_provider('cn1', uuids.agg1, uuids.agg2) tb.add_inventory(cn1, orc.VCPU, 24) tb.add_inventory(cn1, orc.DISK_GB, 1600) cn2 = self._create_provider('cn2', uuids.agg2, uuids.agg3) tb.add_inventory(cn2, orc.VCPU, 24) tb.add_inventory(cn2, orc.DISK_GB, 1600) # ss1 is connected to cn1 ss1 = self._create_provider('ss1', uuids.agg1) tb.set_traits(ss1, "MISC_SHARES_VIA_AGGREGATE") tb.add_inventory(ss1, orc.DISK_GB, 1600) # ss2 is connected to both cn1 and cn2 ss2 = self._create_provider('ss2', uuids.agg2) tb.set_traits(ss2, "MISC_SHARES_VIA_AGGREGATE") tb.add_inventory(ss2, orc.DISK_GB, 1600) # ss3 is connected to cn2 ss3 = self._create_provider('ss3', uuids.agg3) tb.set_traits(ss3, "MISC_SHARES_VIA_AGGREGATE") tb.add_inventory(ss3, orc.DISK_GB, 1600) # Let's get allocation candidates from agg1 alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources={ 'VCPU': 2, 'DISK_GB': 1500, }, member_of=[[uuids.agg1]] )} ) expected = [ [('cn1', orc.VCPU, 2), ('cn1', orc.DISK_GB, 1500)], [('cn1', orc.VCPU, 2), ('ss1', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'cn1': set([ (orc.VCPU, 24, 0), (orc.DISK_GB, 1600, 0), ]), 'ss1': set([ (orc.DISK_GB, 1600, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) # Let's get allocation candidates from agg2 alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources={ 'VCPU': 2, 'DISK_GB': 1500, }, member_of=[[uuids.agg2]] )} ) expected = [ [('cn1', orc.VCPU, 2), ('cn1', orc.DISK_GB, 1500)], [('cn1', orc.VCPU, 2), ('ss2', orc.DISK_GB, 1500)], [('cn2', orc.VCPU, 2), ('cn2', orc.DISK_GB, 1500)], [('cn2', orc.VCPU, 2), ('ss2', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, 
alloc_cands) expected = { 'cn1': set([ (orc.VCPU, 24, 0), (orc.DISK_GB, 1600, 0), ]), 'cn2': set([ (orc.VCPU, 24, 0), (orc.DISK_GB, 1600, 0), ]), 'ss2': set([ (orc.DISK_GB, 1600, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) # Let's move to validate multiple member_of scenario # The request from agg1 *AND* agg2 would provide only # resources from cn1 with its local DISK alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources={ 'VCPU': 2, 'DISK_GB': 1500, }, member_of=[[uuids.agg1], [uuids.agg2]] )} ) expected = [ [('cn1', orc.VCPU, 2), ('cn1', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'cn1': set([ (orc.VCPU, 24, 0), (orc.DISK_GB, 1600, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) # The request from agg1 *OR* agg2 would provide five candidates alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources={ 'VCPU': 2, 'DISK_GB': 1500, }, member_of=[[uuids.agg1, uuids.agg2]] )} ) expected = [ [('cn1', orc.VCPU, 2), ('cn1', orc.DISK_GB, 1500)], [('cn1', orc.VCPU, 2), ('ss1', orc.DISK_GB, 1500)], [('cn1', orc.VCPU, 2), ('ss2', orc.DISK_GB, 1500)], [('cn2', orc.VCPU, 2), ('cn2', orc.DISK_GB, 1500)], [('cn2', orc.VCPU, 2), ('ss2', orc.DISK_GB, 1500)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'cn1': set([ (orc.VCPU, 24, 0), (orc.DISK_GB, 1600, 0), ]), 'cn2': set([ (orc.VCPU, 24, 0), (orc.DISK_GB, 1600, 0), ]), 'ss1': set([ (orc.DISK_GB, 1600, 0), ]), 'ss2': set([ (orc.DISK_GB, 1600, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) def test_two_sharing_indirectly_connected_connecting_not_give_resource( self): # This covers the following setup # CN1 (VCPU, MEMORY_MB) # / \ # /agg1 \agg2 # / \ # SS1 ( SS2 ( # DISK_GB) IPV4_ADDRESS # SRIOV_NET_VF) # The request then made for resources from the sharing RPs only ss1 = self._create_provider('ss1', uuids.agg1) tb.set_traits(ss1, "MISC_SHARES_VIA_AGGREGATE") tb.add_inventory(ss1, orc.DISK_GB, 1600) cn1 = self._create_provider('cn1', uuids.agg1, uuids.agg2) tb.add_inventory(cn1, orc.VCPU, 24) tb.add_inventory(cn1, orc.MEMORY_MB, 2048) ss2 = self._create_provider('ss2', uuids.agg2) tb.set_traits(ss2, "MISC_SHARES_VIA_AGGREGATE") tb.add_inventory(ss2, orc.IPV4_ADDRESS, 24) tb.add_inventory(ss2, orc.SRIOV_NET_VF, 16) alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources={ 'IPV4_ADDRESS': 2, 'SRIOV_NET_VF': 1, 'DISK_GB': 1500, } )} ) expected = [ [('ss1', orc.DISK_GB, 1500), ('ss2', orc.IPV4_ADDRESS, 2), ('ss2', orc.SRIOV_NET_VF, 1)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'ss1': set([ (orc.DISK_GB, 1600, 0), ]), 'ss2': set([ (orc.IPV4_ADDRESS, 24, 0), (orc.SRIOV_NET_VF, 16, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) def test_two_sharing_indirectly_connected_connecting_gives_resource(self): # This covers the following setup # CN1 (VCPU, MEMORY_MB) # / \ # /agg1 \agg2 # / \ # SS1 ( SS2 ( # DISK_GB) IPV4_ADDRESS # SRIOV_NET_VF) # The request then made for resources from all three RPs ss1 = self._create_provider('ss1', uuids.agg1) tb.set_traits(ss1, "MISC_SHARES_VIA_AGGREGATE") tb.add_inventory(ss1, orc.DISK_GB, 1600) cn1 = self._create_provider('cn1', uuids.agg1, uuids.agg2) tb.add_inventory(cn1, orc.VCPU, 24) tb.add_inventory(cn1, orc.MEMORY_MB, 2048) ss2 = 
self._create_provider('ss2', uuids.agg2) tb.set_traits(ss2, "MISC_SHARES_VIA_AGGREGATE") tb.add_inventory(ss2, orc.IPV4_ADDRESS, 24) tb.add_inventory(ss2, orc.SRIOV_NET_VF, 16) alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources={ 'VCPU': 2, 'IPV4_ADDRESS': 2, 'SRIOV_NET_VF': 1, 'DISK_GB': 1500, } )} ) expected = [ [('cn1', orc.VCPU, 2), ('ss1', orc.DISK_GB, 1500), ('ss2', orc.IPV4_ADDRESS, 2), ('ss2', orc.SRIOV_NET_VF, 1)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'cn1': set([ (orc.VCPU, 24, 0), (orc.MEMORY_MB, 2048, 0), ]), 'ss1': set([ (orc.DISK_GB, 1600, 0), ]), 'ss2': set([ (orc.IPV4_ADDRESS, 24, 0), (orc.SRIOV_NET_VF, 16, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) def test_simple_tree_of_providers(self): """Tests that we properly winnow allocation requests when including traits in the request group and that the traits appear in the provider summaries of the returned allocation candidates """ # We are setting up a single tree that looks like this: # # compute node (cn) # / \ # / \ # numa cell 0 numa cell 1 # | | # | | # pf 0 pf 1 # # The second physical function will be associated with the # HW_NIC_OFFLOAD_GENEVE trait, but not the first physical function. # # We will issue a request to _get_allocation_candidates() for VCPU, # MEMORY_MB and SRIOV_NET_VF **without** required traits, then include # a request that includes HW_NIC_OFFLOAD_GENEVE. In the latter case, # the compute node tree should be returned but the allocation requests # should only include the second physical function since the required # trait is only associated with that PF. # # Subsequently, we will consume all the SRIOV_NET_VF resources from the # second PF's inventory and attempt the same request of resources and # HW_NIC_OFFLOAD_GENEVE. We should get 0 returned results because now # the only PF that has the required trait has no inventory left. 
cn = self._create_provider('cn') tb.add_inventory(cn, orc.VCPU, 16) tb.add_inventory(cn, orc.MEMORY_MB, 32768) numa_cell0 = self._create_provider('cn_numa0', parent=cn.uuid) numa_cell1 = self._create_provider('cn_numa1', parent=cn.uuid) pf0 = self._create_provider('cn_numa0_pf0', parent=numa_cell0.uuid) tb.add_inventory(pf0, orc.SRIOV_NET_VF, 8) pf1 = self._create_provider('cn_numa1_pf1', parent=numa_cell1.uuid) tb.add_inventory(pf1, orc.SRIOV_NET_VF, 8) tb.set_traits(pf1, os_traits.HW_NIC_OFFLOAD_GENEVE) alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources={ orc.VCPU: 2, orc.MEMORY_MB: 256, orc.SRIOV_NET_VF: 1, } )} ) expected = [ [('cn', orc.VCPU, 2), ('cn', orc.MEMORY_MB, 256), ('cn_numa0_pf0', orc.SRIOV_NET_VF, 1)], [('cn', orc.VCPU, 2), ('cn', orc.MEMORY_MB, 256), ('cn_numa1_pf1', orc.SRIOV_NET_VF, 1)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'cn': set([ (orc.VCPU, 16, 0), (orc.MEMORY_MB, 32768, 0), ]), 'cn_numa0': set([]), 'cn_numa1': set([]), 'cn_numa0_pf0': set([ (orc.SRIOV_NET_VF, 8, 0), ]), 'cn_numa1_pf1': set([ (orc.SRIOV_NET_VF, 8, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) expected = { 'cn': set([]), 'cn_numa0': set([]), 'cn_numa1': set([]), 'cn_numa0_pf0': set([]), 'cn_numa1_pf1': set([os_traits.HW_NIC_OFFLOAD_GENEVE]), } self._validate_provider_summary_traits(expected, alloc_cands) # Now add required traits to the mix and verify we still get the same # result (since we haven't yet consumed the second physical function's # inventory of SRIOV_NET_VF. alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources={ orc.VCPU: 2, orc.MEMORY_MB: 256, orc.SRIOV_NET_VF: 1, }, required_traits=[{os_traits.HW_NIC_OFFLOAD_GENEVE}], )} ) expected = [ [('cn', orc.VCPU, 2), ('cn', orc.MEMORY_MB, 256), ('cn_numa1_pf1', orc.SRIOV_NET_VF, 1)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'cn': set([ (orc.VCPU, 16, 0), (orc.MEMORY_MB, 32768, 0), ]), 'cn_numa0': set([]), 'cn_numa1': set([]), 'cn_numa0_pf0': set([ (orc.SRIOV_NET_VF, 8, 0), ]), 'cn_numa1_pf1': set([ (orc.SRIOV_NET_VF, 8, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) expected = { 'cn': set([]), 'cn_numa0': set([]), 'cn_numa1': set([]), 'cn_numa0_pf0': set([]), 'cn_numa1_pf1': set([os_traits.HW_NIC_OFFLOAD_GENEVE]), } self._validate_provider_summary_traits(expected, alloc_cands) # Next we test that we get resources only on non-root providers # without root providers involved alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources={ orc.SRIOV_NET_VF: 1, }, )} ) expected = [ [('cn_numa0_pf0', orc.SRIOV_NET_VF, 1)], [('cn_numa1_pf1', orc.SRIOV_NET_VF, 1)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'cn': set([ (orc.VCPU, 16, 0), (orc.MEMORY_MB, 32768, 0), ]), 'cn_numa0': set([]), 'cn_numa1': set([]), 'cn_numa0_pf0': set([ (orc.SRIOV_NET_VF, 8, 0), ]), 'cn_numa1_pf1': set([ (orc.SRIOV_NET_VF, 8, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) expected = { 'cn': set([]), 'cn_numa0': set([]), 'cn_numa1': set([]), 'cn_numa0_pf0': set([]), 'cn_numa1_pf1': set([os_traits.HW_NIC_OFFLOAD_GENEVE]), } self._validate_provider_summary_traits(expected, alloc_cands) # Same, but with the request in a granular group, which hits a # different code path. 
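# NOTE: clarifying aside, not part of the original comment: a RequestGroup
# with use_same_provider=True is the internal form of a numbered (granular)
# group in the API, e.g. ``resources1=SRIOV_NET_VF:1`` (available since
# microversion 1.25), where all resources in the group must be satisfied by
# a single provider. The unsuffixed group used above sets
# use_same_provider=False, so its resources may be spread across a provider
# tree and its sharing providers.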
alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=True, resources={ orc.SRIOV_NET_VF: 1, }, )} ) expected = [ [('cn_numa0_pf0', orc.SRIOV_NET_VF, 1)], [('cn_numa1_pf1', orc.SRIOV_NET_VF, 1)], ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'cn': set([ (orc.VCPU, 16, 0), (orc.MEMORY_MB, 32768, 0), ]), 'cn_numa0': set([]), 'cn_numa1': set([]), 'cn_numa0_pf0': set([ (orc.SRIOV_NET_VF, 8, 0), ]), 'cn_numa1_pf1': set([ (orc.SRIOV_NET_VF, 8, 0), ]), } self._validate_provider_summary_resources(expected, alloc_cands) expected = { 'cn': set([]), 'cn_numa0': set([]), 'cn_numa1': set([]), 'cn_numa0_pf0': set([]), 'cn_numa1_pf1': set([os_traits.HW_NIC_OFFLOAD_GENEVE]), } self._validate_provider_summary_traits(expected, alloc_cands) # Now consume all the inventory of SRIOV_NET_VF on the second physical # function (the one with HW_NIC_OFFLOAD_GENEVE associated with it) and # verify that the same request still results in 0 results since the # function with the required trait no longer has any inventory. self.allocate_from_provider(pf1, orc.SRIOV_NET_VF, 8) alloc_cands = self._get_allocation_candidates({ '': placement_lib.RequestGroup( use_same_provider=False, resources={ orc.VCPU: 2, orc.MEMORY_MB: 256, orc.SRIOV_NET_VF: 1, }, required_traits=[{os_traits.HW_NIC_OFFLOAD_GENEVE}], ) }) self._validate_allocation_requests([], alloc_cands) self._validate_provider_summary_resources({}, alloc_cands) self._validate_provider_summary_traits({}, alloc_cands) def test_forbidden_trait_in_unnamed_group_with_split_rcs_on_nested_tree( self ): """Using the following trees: cn1 VCPU=2 | cn1_c1 SRIOV_NET_VF=2, CUSTOM_FOO cn2 VCPU=2 | cn2_c1 SRIOV_NET_VF=2 """ cn1 = self._create_provider('cn1') tb.add_inventory(cn1, orc.VCPU, 2) cn1_c1 = self._create_provider('cn1_c1', parent=cn1.uuid) tb.add_inventory(cn1_c1, orc.SRIOV_NET_VF, 2) tb.set_traits(cn1_c1, 'CUSTOM_FOO') cn2 = self._create_provider('cn2') tb.add_inventory(cn2, orc.VCPU, 2) cn2_c1 = self._create_provider('cn2_c1', parent=cn2.uuid) tb.add_inventory(cn2_c1, orc.SRIOV_NET_VF, 2) alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources={ orc.VCPU: 1, orc.SRIOV_NET_VF: 1, }, forbidden_traits={ 'CUSTOM_FOO', }, )} ) # the tree rooted at CN1 is expected to be filtered out due to # forbidden trait on CN1_C1 # CN2 tree is the same as CN1 but without the forbidden trait so that # is a match expected = [ [('cn2', 'VCPU', 1), ('cn2_c1', 'SRIOV_NET_VF', 1)] ] self._validate_allocation_requests(expected, alloc_cands) def test_forbidden_trait_in_unnamed_group_in_nested_tree(self): """Using the following trees: cn1 VCPU=2 | cn1_c1 SRIOV_NET_VF=2, CUSTOM_FOO cn2 VCPU=2 | cn2_c1 SRIOV_NET_VF=2 """ cn1 = self._create_provider('cn1') tb.add_inventory(cn1, orc.VCPU, 2) cn1_c1 = self._create_provider('cn1_c1', parent=cn1.uuid) tb.add_inventory(cn1_c1, orc.SRIOV_NET_VF, 2) tb.set_traits(cn1_c1, 'CUSTOM_FOO') cn2 = self._create_provider('cn2') tb.add_inventory(cn2, orc.VCPU, 2) cn2_c1 = self._create_provider('cn2_c1', parent=cn2.uuid) tb.add_inventory(cn2_c1, orc.SRIOV_NET_VF, 2) alloc_cands = self._get_allocation_candidates( {'': placement_lib.RequestGroup( use_same_provider=False, resources={ orc.VCPU: 1, }, forbidden_traits={ 'CUSTOM_FOO', }, )} ) # both CN1 and CN2 are returned. CN1 has the forbidden trait # in its tree but there is no RC requested from that RP providing the # forbidden trait. 
The general rule is # "traits on resource providers never span other resource providers." # See # https://docs.openstack.org/placement/latest/user/provider-tree.html#filtering-by-traits expected = [ [('cn1', 'VCPU', 1)], [('cn2', 'VCPU', 1)] ] self._validate_allocation_requests(expected, alloc_cands) def test_simple_tree_with_shared_provider(self): """Tests that we properly winnow allocation requests when including shared and nested providers """ # We are setting up 2 cn trees with 2 shared storages # that look like this: # # compute node (cn1) ----- shared storage (ss1) # / \ agg1 with 2000 DISK_GB # / \ # numa cell 1_0 numa cell 1_1 # | | # | | # pf 1_0 pf 1_1(HW_NIC_OFFLOAD_GENEVE) # # compute node (cn2) ----- shared storage (ss2) # / \ agg2 with 1000 DISK_GB # / \ # numa cell 2_0 numa cell 2_1 # | | # | | # pf 2_0 pf 2_1(HW_NIC_OFFLOAD_GENEVE) # # The second physical function in both trees (pf1_1, pf 2_1) will be # associated with the HW_NIC_OFFLOAD_GENEVE trait, but not the first # physical function. # # We will issue a request to _get_allocation_candidates() for VCPU, # SRIOV_NET_VF and DISK_GB **without** required traits, then include # a request that includes HW_NIC_OFFLOAD_GENEVE. In the latter case, # the compute node tree should be returned but the allocation requests # should only include the second physical function since the required # trait is only associated with that PF. cn1 = self._create_provider('cn1', uuids.agg1) cn2 = self._create_provider('cn2', uuids.agg2) tb.add_inventory(cn1, orc.VCPU, 16) tb.add_inventory(cn2, orc.VCPU, 16) numa1_0 = self._create_provider('cn1_numa0', parent=cn1.uuid) numa1_1 = self._create_provider('cn1_numa1', parent=cn1.uuid) numa2_0 = self._create_provider('cn2_numa0', parent=cn2.uuid) numa2_1 = self._create_provider('cn2_numa1', parent=cn2.uuid) pf1_0 = self._create_provider('cn1_numa0_pf0', parent=numa1_0.uuid) pf1_1 = self._create_provider('cn1_numa1_pf1', parent=numa1_1.uuid) pf2_0 = self._create_provider('cn2_numa0_pf0', parent=numa2_0.uuid) pf2_1 = self._create_provider('cn2_numa1_pf1', parent=numa2_1.uuid) tb.add_inventory(pf1_0, orc.SRIOV_NET_VF, 8) tb.add_inventory(pf1_1, orc.SRIOV_NET_VF, 8) tb.add_inventory(pf2_0, orc.SRIOV_NET_VF, 8) tb.add_inventory(pf2_1, orc.SRIOV_NET_VF, 8) tb.set_traits(pf2_1, os_traits.HW_NIC_OFFLOAD_GENEVE) tb.set_traits(pf1_1, os_traits.HW_NIC_OFFLOAD_GENEVE) ss1 = self._create_provider('ss1', uuids.agg1) ss2 = self._create_provider('ss2', uuids.agg2) tb.add_inventory(ss1, orc.DISK_GB, 2000) tb.add_inventory(ss2, orc.DISK_GB, 1000) tb.set_traits(ss1, 'MISC_SHARES_VIA_AGGREGATE') tb.set_traits(ss2, 'MISC_SHARES_VIA_AGGREGATE') alloc_cands = self._get_allocation_candidates({ '': placement_lib.RequestGroup( use_same_provider=False, resources={ orc.VCPU: 2, orc.SRIOV_NET_VF: 1, orc.DISK_GB: 1500, }) }) # cn2 is not in the allocation candidates because it doesn't have # enough DISK_GB resource with shared providers. 
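        # (ss2 shares with cn2 via agg2 but only has 1000 DISK_GB, which
        # cannot satisfy the requested 1500 DISK_GB.)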
expected = [ [('cn1', orc.VCPU, 2), ('cn1_numa0_pf0', orc.SRIOV_NET_VF, 1), ('ss1', orc.DISK_GB, 1500)], [('cn1', orc.VCPU, 2), ('cn1_numa1_pf1', orc.SRIOV_NET_VF, 1), ('ss1', orc.DISK_GB, 1500)] ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'cn1': set([ (orc.VCPU, 16, 0) ]), 'cn1_numa0': set([]), 'cn1_numa1': set([]), 'cn1_numa0_pf0': set([ (orc.SRIOV_NET_VF, 8, 0) ]), 'cn1_numa1_pf1': set([ (orc.SRIOV_NET_VF, 8, 0) ]), 'ss1': set([ (orc.DISK_GB, 2000, 0) ]), } self._validate_provider_summary_resources(expected, alloc_cands) # Now add required traits to the mix and verify we still get the # inventory of SRIOV_NET_VF. alloc_cands = self._get_allocation_candidates({ '': placement_lib.RequestGroup( use_same_provider=False, resources={ orc.VCPU: 2, orc.SRIOV_NET_VF: 1, orc.DISK_GB: 1500, }, required_traits=[{os_traits.HW_NIC_OFFLOAD_GENEVE}]) }) # cn1_numa0_pf0 is not in the allocation candidates because it # doesn't have the required trait. expected = [ [('cn1', orc.VCPU, 2), ('cn1_numa1_pf1', orc.SRIOV_NET_VF, 1), ('ss1', orc.DISK_GB, 1500)] ] self._validate_allocation_requests(expected, alloc_cands) expected = { 'cn1': set([ (orc.VCPU, 16, 0) ]), 'cn1_numa0': set([]), 'cn1_numa1': set([]), 'cn1_numa0_pf0': set([ (orc.SRIOV_NET_VF, 8, 0) ]), 'cn1_numa1_pf1': set([ (orc.SRIOV_NET_VF, 8, 0) ]), 'ss1': set([ (orc.DISK_GB, 2000, 0) ]), } self._validate_provider_summary_resources(expected, alloc_cands) def _create_nested_trees(self): # We are setting up 2 identical compute trees with no storage # that look like this: # # compute node (cn1) # / \ # / \ # numa cell 1_0 numa cell 1_1 # | | # | | # pf 1_0 pf 1_1 # # compute node (cn2) # / \ # / \ # numa cell 2_0 numa cell 2_1 # | | # | | # pf 2_0 pf 2_1 # cn1 = self._create_provider('cn1', uuids.agg1) cn2 = self._create_provider('cn2', uuids.agg2) tb.add_inventory(cn1, orc.VCPU, 16) tb.add_inventory(cn2, orc.VCPU, 16) numa1_0 = self._create_provider('cn1_numa0', parent=cn1.uuid) numa1_1 = self._create_provider('cn1_numa1', parent=cn1.uuid) numa2_0 = self._create_provider('cn2_numa0', parent=cn2.uuid) numa2_1 = self._create_provider('cn2_numa1', parent=cn2.uuid) pf1_0 = self._create_provider('cn1_numa0_pf0', parent=numa1_0.uuid) pf1_1 = self._create_provider('cn1_numa1_pf1', parent=numa1_1.uuid) pf2_0 = self._create_provider('cn2_numa0_pf0', parent=numa2_0.uuid) pf2_1 = self._create_provider('cn2_numa1_pf1', parent=numa2_1.uuid) tb.add_inventory(pf1_0, orc.SRIOV_NET_VF, 8) tb.add_inventory(pf1_1, orc.SRIOV_NET_VF, 8) tb.add_inventory(pf2_0, orc.SRIOV_NET_VF, 8) tb.add_inventory(pf2_1, orc.SRIOV_NET_VF, 8) def test_nested_result_count_none(self): """Tests that we properly winnow allocation requests when including nested providers from different request groups with group policy none. """ self._create_nested_trees() # Make a granular request to check count of results. alloc_cands = self._get_allocation_candidates({ '': placement_lib.RequestGroup( use_same_provider=False, resources={ orc.VCPU: 2, }), '_NET1': placement_lib.RequestGroup( use_same_provider=True, resources={ orc.SRIOV_NET_VF: 1, }), '_NET2': placement_lib.RequestGroup( use_same_provider=True, resources={ orc.SRIOV_NET_VF: 1, }), }, rqparams=placement_lib.RequestWideParams(group_policy='none')) # 4 VF providers each providing 2, 1, or 0 inventory makes 6 # different combinations, plus two more that are effectively # the same but satisfying different suffix mappings. 
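        # Spelled out per compute node tree: both VFs from pf0, both VFs
        # from pf1, or one VF from each PF (the latter counted twice, once
        # per suffix mapping), giving 4 candidates per tree and 8 in total.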
self.assertEqual(8, len(alloc_cands.allocation_requests)) def test_nested_result_count_different_amounts_isolate(self): """Tests that we properly winnow allocation requests when including nested providers from different request groups, with different requested amounts. """ self._create_nested_trees() # Make a granular request to check count of results. alloc_cands = self._get_allocation_candidates({ '': placement_lib.RequestGroup( use_same_provider=False, resources={ orc.VCPU: 2, }), '_NET1': placement_lib.RequestGroup( use_same_provider=True, resources={ orc.SRIOV_NET_VF: 1, }), '_NET2': placement_lib.RequestGroup( use_same_provider=True, resources={ orc.SRIOV_NET_VF: 2, }), }, rqparams=placement_lib.RequestWideParams(group_policy='isolate')) self.assertEqual(4, len(alloc_cands.allocation_requests)) def test_nested_result_suffix_mappings(self): """Confirm that paying attention to suffix mappings expands the quantity of results and confirm those results. """ self._create_nested_trees() # Make a granular request to check count and suffixes of results. alloc_cands = self._get_allocation_candidates({ '': placement_lib.RequestGroup( use_same_provider=False, resources={ orc.VCPU: 2, }), '_NET1': placement_lib.RequestGroup( use_same_provider=True, resources={ orc.SRIOV_NET_VF: 1, }), '_NET2': placement_lib.RequestGroup( use_same_provider=True, resources={ orc.SRIOV_NET_VF: 1, }), }, rqparams=placement_lib.RequestWideParams(group_policy='isolate')) expected = [ [('cn1', orc.VCPU, 2, ['']), ('cn1_numa0_pf0', orc.SRIOV_NET_VF, 1, ['_NET1']), ('cn1_numa1_pf1', orc.SRIOV_NET_VF, 1, ['_NET2'])], [('cn1', orc.VCPU, 2, ['']), ('cn1_numa0_pf0', orc.SRIOV_NET_VF, 1, ['_NET2']), ('cn1_numa1_pf1', orc.SRIOV_NET_VF, 1, ['_NET1'])], [('cn2', orc.VCPU, 2, ['']), ('cn2_numa0_pf0', orc.SRIOV_NET_VF, 1, ['_NET1']), ('cn2_numa1_pf1', orc.SRIOV_NET_VF, 1, ['_NET2'])], [('cn2', orc.VCPU, 2, ['']), ('cn2_numa0_pf0', orc.SRIOV_NET_VF, 1, ['_NET2']), ('cn2_numa1_pf1', orc.SRIOV_NET_VF, 1, ['_NET1'])], ] # Near the end of _merge candidates we expect 4 different collections # of AllocationRequest to attempt to be added to a set. Admittance is # controlled by the __hash__ and __eq__ of the AllocationRequest which, # in this case, should keep the results at 4 since they are defined to # be different when they have different suffixes even if they have the # same resource provider, the same resource class and the same desired # amount. self.assertEqual(4, len(alloc_cands.allocation_requests)) self._validate_allocation_requests( expected, alloc_cands, expect_suffixes=True) def test_nested_result_suffix_mappings_non_isolated(self): """Confirm that paying attention to suffix mappings expands the quantity of results and confirm those results. """ self._create_nested_trees() # Make a granular request to check count and suffixes of results. alloc_cands = self._get_allocation_candidates({ '': placement_lib.RequestGroup( use_same_provider=False, resources={ orc.VCPU: 2, }), '_NET1': placement_lib.RequestGroup( use_same_provider=True, resources={ orc.SRIOV_NET_VF: 1, }), '_NET2': placement_lib.RequestGroup( use_same_provider=True, resources={ orc.SRIOV_NET_VF: 1, }), }, rqparams=placement_lib.RequestWideParams(group_policy='none')) # We get four candidates from each compute node: # [A] Two where one VF comes from each PF+RequestGroup combination. # [B] Two where both VFs come from the same PF (which satisfies both # RequestGroupZ). 
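        # That is, both granular groups (_NET1 and _NET2) may be satisfied
        # by a single PF, as the 2-VF entries in the expected list below
        # show.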
expected = [ # [A] (cn1) [('cn1', orc.VCPU, 2, ['']), ('cn1_numa0_pf0', orc.SRIOV_NET_VF, 1, ['_NET1']), ('cn1_numa1_pf1', orc.SRIOV_NET_VF, 1, ['_NET2'])], [('cn1', orc.VCPU, 2, ['']), ('cn1_numa0_pf0', orc.SRIOV_NET_VF, 1, ['_NET2']), ('cn1_numa1_pf1', orc.SRIOV_NET_VF, 1, ['_NET1'])], # [B] (cn1) [('cn1', orc.VCPU, 2, ['']), ('cn1_numa0_pf0', orc.SRIOV_NET_VF, 2, ['_NET1', '_NET2'])], [('cn1', orc.VCPU, 2, ['']), ('cn1_numa1_pf1', orc.SRIOV_NET_VF, 2, ['_NET1', '_NET2'])], # [A] (cn2) [('cn2', orc.VCPU, 2, ['']), ('cn2_numa0_pf0', orc.SRIOV_NET_VF, 1, ['_NET1']), ('cn2_numa1_pf1', orc.SRIOV_NET_VF, 1, ['_NET2'])], [('cn2', orc.VCPU, 2, ['']), ('cn2_numa0_pf0', orc.SRIOV_NET_VF, 1, ['_NET2']), ('cn2_numa1_pf1', orc.SRIOV_NET_VF, 1, ['_NET1'])], # [B] (cn2) [('cn2', orc.VCPU, 2, ['']), ('cn2_numa0_pf0', orc.SRIOV_NET_VF, 2, ['_NET1', '_NET2'])], [('cn2', orc.VCPU, 2, ['']), ('cn2_numa1_pf1', orc.SRIOV_NET_VF, 2, ['_NET1', '_NET2'])], ] self.assertEqual(8, len(alloc_cands.allocation_requests)) self._validate_allocation_requests( expected, alloc_cands, expect_suffixes=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/db/test_attribute_cache.py0000664000175000017500000001257000000000000030242 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import datetime from unittest import mock from oslo_utils import timeutils from placement import attribute_cache from placement import exception from placement.tests.functional import base class TestAttributeCache(base.TestCase): def test_no_super_instance(self): """Test that we can't create an _AttributeCache.""" exc = self.assertRaises( AssertionError, attribute_cache._AttributeCache, self.context) self.assertIn('_table must be defined', str(exc)) class TestResourceClassCache(base.TestCase): def test_rc_cache_std_db(self): """Test that looking up either an ID or a string in the resource class cache for a standardized resource class doesn't result in a DB call once the cache is initialized """ cache = attribute_cache.ResourceClassCache(self.context) cache._refresh_from_db(self.context) with mock.patch('sqlalchemy.select') as sel_mock: self.assertEqual('VCPU', cache.string_from_id(0)) self.assertEqual('MEMORY_MB', cache.string_from_id(1)) self.assertEqual(0, cache.id_from_string('VCPU')) self.assertEqual(1, cache.id_from_string('MEMORY_MB')) self.assertFalse(sel_mock.called) def test_standard_has_time_fields(self): cache = attribute_cache.ResourceClassCache(self.context) vcpu_class = dict(cache.all_from_string('VCPU')._mapping) expected = {'id': 0, 'name': 'VCPU', 'updated_at': None, 'created_at': None} expected_fields = sorted(expected.keys()) self.assertEqual(expected_fields, sorted(vcpu_class.keys())) self.assertEqual(0, vcpu_class['id']) self.assertEqual('VCPU', vcpu_class['name']) def test_rc_cache_custom(self): """Test that non-standard, custom resource classes hit the database and return appropriate results, caching the results after a single query. """ cache = attribute_cache.ResourceClassCache(self.context) # Haven't added anything to the DB yet, so should raise # ResourceClassNotFound self.assertRaises(exception.ResourceClassNotFound, cache.string_from_id, 1001) self.assertRaises(exception.ResourceClassNotFound, cache.id_from_string, "IRON_NFV") # Now add to the database and verify appropriate results... with self.placement_db.get_engine().connect() as conn: ins_stmt = attribute_cache._RC_TBL.insert().values( id=1001, name='IRON_NFV' ) with conn.begin(): conn.execute(ins_stmt) self.assertEqual('IRON_NFV', cache.string_from_id(1001)) self.assertEqual(1001, cache.id_from_string('IRON_NFV')) # Try same again and verify we don't hit the DB. with mock.patch('sqlalchemy.select') as sel_mock: self.assertEqual('IRON_NFV', cache.string_from_id(1001)) self.assertEqual(1001, cache.id_from_string('IRON_NFV')) self.assertFalse(sel_mock.called) # Verify all fields available from all_from_string iron_nfv_class = cache.all_from_string('IRON_NFV') self.assertEqual(1001, iron_nfv_class.id) self.assertEqual('IRON_NFV', iron_nfv_class.name) # updated_at not set on insert self.assertIsNone(iron_nfv_class.updated_at) self.assertIsInstance(iron_nfv_class.created_at, datetime.datetime) # Update IRON_NFV (this is a no-op but will set updated_at) with self.placement_db.get_engine().connect() as conn: # NOTE(cdent): When using explict SQL that names columns, # the automatic timestamp handling provided by the oslo_db # TimestampMixin is not provided. created_at is a default # but updated_at is an onupdate. 
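            # As a rough sketch, the statement built below corresponds to
            # SQL along these lines (the resource_classes table name is
            # assumed from the _RC_TBL definition):
            #
            #   UPDATE resource_classes
            #   SET name = 'IRON_NFV', updated_at = <utcnow()>
            #   WHERE resource_classes.id = 1001;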
upd_stmt = attribute_cache._RC_TBL.update().where( attribute_cache._RC_TBL.c.id == 1001).values( name='IRON_NFV', updated_at=timeutils.utcnow()) with conn.begin(): conn.execute(upd_stmt) # reset cache cache = attribute_cache.ResourceClassCache(self.context) iron_nfv_class = cache.all_from_string('IRON_NFV') # updated_at set on update self.assertIsInstance(iron_nfv_class.updated_at, datetime.datetime) def test_rc_cache_miss(self): """Test that we raise ResourceClassNotFound if an unknown resource class ID or string is searched for. """ cache = attribute_cache.ResourceClassCache(self.context) self.assertRaises(exception.ResourceClassNotFound, cache.string_from_id, 99999999) self.assertRaises(exception.ResourceClassNotFound, cache.id_from_string, 'UNKNOWN') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/db/test_base.py0000664000175000017500000001456200000000000026031 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Base class and convenience utilities for functional placement tests.""" import copy import os_resource_classes as orc from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import uuidutils from placement import exception from placement.objects import allocation as alloc_obj from placement.objects import consumer as consumer_obj from placement.objects import inventory as inv_obj from placement.objects import project as project_obj from placement.objects import resource_class as rc_obj from placement.objects import resource_provider as rp_obj from placement.objects import trait as trait_obj from placement.objects import user as user_obj from placement.tests.functional import base DISK_INVENTORY = dict( total=200, reserved=10, min_unit=2, max_unit=5, step_size=1, allocation_ratio=1.0, resource_class=orc.DISK_GB ) DISK_ALLOCATION = dict( consumer_id=uuids.disk_consumer, used=2, resource_class=orc.DISK_GB ) def create_provider(context, name, *aggs, **kwargs): parent = kwargs.get('parent') uuid = kwargs.get('uuid', getattr(uuids, name)) rp = rp_obj.ResourceProvider(context, name=name, uuid=uuid) if parent: rp.parent_provider_uuid = parent rp.create() if aggs: rp.set_aggregates(aggs) return rp def ensure_rc(context, name): try: rc_obj.ResourceClass.get_by_name(context, name) except exception.NotFound: rc_obj.ResourceClass(context, name=name).create() def add_inventory(rp, rc, total, **kwargs): ensure_rc(rp._context, rc) kwargs.setdefault('max_unit', total) inv = inv_obj.Inventory(rp._context, resource_provider=rp, resource_class=rc, total=total, **kwargs) rp.add_inventory(inv) return inv def set_traits(rp, *traits): tlist = [] for tname in traits: try: trait = trait_obj.Trait.get_by_name(rp._context, tname) except exception.TraitNotFound: trait = trait_obj.Trait(rp._context, name=tname) trait.create() tlist.append(trait) rp.set_traits(tlist) return tlist def ensure_consumer(ctx, user, project, consumer_id=None): # NOTE(efried): If not specified, 
use a random consumer UUID - we don't # want to override any existing allocations from the test case. consumer_id = consumer_id or uuidutils.generate_uuid() try: consumer = consumer_obj.Consumer.get_by_uuid(ctx, consumer_id) except exception.NotFound: consumer = consumer_obj.Consumer( ctx, uuid=consumer_id, user=user, project=project) consumer.create() return consumer def set_allocation(ctx, rp, consumer, rc_used_dict): alloc = [ alloc_obj.Allocation( resource_provider=rp, resource_class=rc, consumer=consumer, used=used) for rc, used in rc_used_dict.items() ] alloc_obj.replace_all(ctx, alloc) return alloc def create_user_and_project(ctx, prefix='fake'): user = user_obj.User(ctx, external_id='%s-user' % prefix) user.create() proj = project_obj.Project(ctx, external_id='%s-project' % prefix) proj.create() return user, proj class PlacementDbBaseTestCase(base.TestCase): def setUp(self): super(PlacementDbBaseTestCase, self).setUp() # we use context in some places and ctx in other. We should only use # context, but let's paper over that for now. self.ctx = self.context self.user_obj, self.project_obj = create_user_and_project(self.ctx) # For debugging purposes, populated by _create_provider and used by # _validate_allocation_requests to make failure results more readable. self.rp_uuid_to_name = {} self.rp_id_to_name = {} def _assert_traits(self, expected_traits, traits_objs): expected_traits.sort() traits = [] for obj in traits_objs: traits.append(obj.name) traits.sort() self.assertEqual(expected_traits, traits) def _assert_traits_in(self, expected_traits, traits_objs): traits = [trait.name for trait in traits_objs] for expected in expected_traits: self.assertIn(expected, traits) def _create_provider(self, name, *aggs, **kwargs): rp = create_provider(self.ctx, name, *aggs, **kwargs) self.rp_uuid_to_name[rp.uuid] = name self.rp_id_to_name[rp.id] = name return rp def get_provider_id_by_name(self, name): rp_ids = [k for k, v in self.rp_id_to_name.items() if v == name] if not len(rp_ids) == 1: raise Exception return rp_ids[0] def allocate_from_provider(self, rp, rc, used, consumer_id=None, consumer=None): if consumer is None: consumer = ensure_consumer( self.ctx, self.user_obj, self.project_obj, consumer_id) alloc_list = set_allocation(self.ctx, rp, consumer, {rc: used}) return alloc_list def _make_allocation(self, inv_dict, alloc_dict): alloc_dict = copy.copy(alloc_dict) rp = self._create_provider('allocation_resource_provider') disk_inv = inv_obj.Inventory(resource_provider=rp, **inv_dict) rp.set_inventory([disk_inv]) consumer_id = alloc_dict.pop('consumer_id') consumer = ensure_consumer( self.ctx, self.user_obj, self.project_obj, consumer_id) alloc = alloc_obj.Allocation( resource_provider=rp, consumer=consumer, **alloc_dict) alloc_obj.replace_all(self.ctx, [alloc]) return rp, alloc def create_aggregate(self, agg_uuid): conn = self.placement_db.get_engine().connect() ins_stmt = rp_obj._AGG_TBL.insert().values(uuid=agg_uuid) with conn.begin(): res = conn.execute(ins_stmt) agg_id = res.inserted_primary_key[0] return agg_id ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/db/test_consumer.py0000664000175000017500000002623500000000000026752 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os_resource_classes as orc from oslo_utils.fixture import uuidsentinel as uuids import sqlalchemy as sa from placement import db_api from placement import exception from placement.objects import allocation as alloc_obj from placement.objects import consumer as consumer_obj from placement.objects import project as project_obj from placement.objects import resource_provider as rp_obj from placement.objects import user as user_obj from placement.tests.functional import base from placement.tests.functional.db import test_base as tb CONSUMER_TBL = consumer_obj.CONSUMER_TBL PROJECT_TBL = project_obj.PROJECT_TBL USER_TBL = user_obj.USER_TBL ALLOC_TBL = rp_obj._ALLOC_TBL class ConsumerTestCase(tb.PlacementDbBaseTestCase): def test_non_existing_consumer(self): self.assertRaises( exception.ConsumerNotFound, consumer_obj.Consumer.get_by_uuid, self.ctx, uuids.non_existing_consumer) def test_create_and_get(self): u = user_obj.User(self.ctx, external_id='another-user') u.create() p = project_obj.Project(self.ctx, external_id='another-project') p.create() c = consumer_obj.Consumer( self.ctx, uuid=uuids.consumer, user=u, project=p) c.create() c = consumer_obj.Consumer.get_by_uuid(self.ctx, uuids.consumer) self.assertEqual(1, c.id) # Project ID == 1 is fake-project created in setup self.assertEqual(2, c.project.id) # User ID == 1 is fake-user created in setup self.assertEqual(2, c.user.id) self.assertRaises(exception.ConsumerExists, c.create) def test_update(self): """Tests the scenario where a user supplies a different project/user ID for an allocation's consumer and we call Consumer.update() to save that information to the consumers table. 
""" # First, create the consumer with the "fake-user" and "fake-project" # user/project in the base test class's setUp c = consumer_obj.Consumer( self.ctx, uuid=uuids.consumer, user=self.user_obj, project=self.project_obj) c.create() c = consumer_obj.Consumer.get_by_uuid(self.ctx, uuids.consumer) self.assertEqual(self.project_obj.id, c.project.id) self.assertEqual(self.user_obj.id, c.user.id) # Now change the consumer's project and user to a different project another_user = user_obj.User(self.ctx, external_id='another-user') another_user.create() another_proj = project_obj.Project( self.ctx, external_id='another-project') another_proj.create() c.project = another_proj c.user = another_user c.update() c = consumer_obj.Consumer.get_by_uuid(self.ctx, uuids.consumer) self.assertEqual(another_proj.id, c.project.id) self.assertEqual(another_user.id, c.user.id) @db_api.placement_context_manager.reader def _get_allocs_with_no_consumer_relationship(ctx): alloc_to_consumer = sa.outerjoin( ALLOC_TBL, CONSUMER_TBL, ALLOC_TBL.c.consumer_id == CONSUMER_TBL.c.uuid) sel = sa.select(ALLOC_TBL.c.consumer_id) sel = sel.select_from(alloc_to_consumer) sel = sel.where(CONSUMER_TBL.c.id.is_(None)) return ctx.session.execute(sel).fetchall() class CreateIncompleteAllocationsMixin(object): """Mixin for test setup to create some allocations with missing consumers """ @db_api.placement_context_manager.writer def _create_leftover_consumer(self, ctx): ins_stmt = CONSUMER_TBL.insert().values( uuid=uuids.unknown_consumer, project_id=999, user_id=999) ctx.session.execute(ins_stmt) @db_api.placement_context_manager.writer def _create_incomplete_allocations(self, ctx, num_of_consumer_allocs=1): # Create some allocations with consumers that don't exist in the # consumers table to represent old allocations that we expect to be # "cleaned up" with consumers table records that point to the sentinel # project/user records. self._create_leftover_consumer(ctx) c1_missing_uuid = uuids.c1_missing c2_missing_uuid = uuids.c2_missing c3_missing_uuid = uuids.c3_missing for c_uuid in (c1_missing_uuid, c2_missing_uuid, c3_missing_uuid): # Create $num_of_consumer_allocs allocations per consumer with # different resource classes. for resource_class_id in range(num_of_consumer_allocs): ins_stmt = ALLOC_TBL.insert().values( resource_provider_id=1, resource_class_id=resource_class_id, consumer_id=c_uuid, used=1) ctx.session.execute(ins_stmt) # Verify there are no records in the projects/users table project_count = ctx.session.scalar( sa.select(sa.func.count('*')).select_from(PROJECT_TBL)) self.assertEqual(0, project_count) user_count = ctx.session.scalar( sa.select(sa.func.count('*')).select_from(USER_TBL)) self.assertEqual(0, user_count) # Verify there are no consumer records for the missing consumers sel = CONSUMER_TBL.select().where( CONSUMER_TBL.c.uuid.in_([c1_missing_uuid, c2_missing_uuid])) res = ctx.session.execute(sel).fetchall() self.assertEqual(0, len(res)) # NOTE(jaypipes): The tb.PlacementDbBaseTestCase creates a project and user # which is why we don't base off that. We want a completely bare DB for this # test. class CreateIncompleteConsumersTestCase( base.TestCase, CreateIncompleteAllocationsMixin): def setUp(self): super(CreateIncompleteConsumersTestCase, self).setUp() self.ctx = self.context def test_create_incomplete_consumers(self): """Test the online data migration that creates incomplete consumer records along with the incomplete consumer project/user records. 
""" self._create_incomplete_allocations(self.ctx) # We do a "really online" online data migration for incomplete # consumers when calling alloc_obj.get_all_by_consumer_id() and # alloc_obj.get_all_by_resource_provider() and there are still # incomplete consumer records. So, to simulate a situation where the # operator has yet to run the nova-manage online_data_migration CLI # tool completely, we first call # consumer_obj.create_incomplete_consumers() with a batch size of 1. # This should mean there will be two allocation records still remaining # with a missing consumer record (since we create 3 total to begin # with). res = consumer_obj.create_incomplete_consumers(self.ctx, 1) self.assertEqual((1, 1), res) # Confirm there are still 2 incomplete allocations after one # iteration of the migration. res = _get_allocs_with_no_consumer_relationship(self.ctx) self.assertEqual(2, len(res)) class DeleteConsumerIfNoAllocsTestCase(tb.PlacementDbBaseTestCase): def test_delete_consumer_if_no_allocs(self): """alloc_obj.replace_all() should attempt to delete consumers that no longer have any allocations. Due to the REST API not having any way to query for consumers directly (only via the GET /allocations/{consumer_uuid} endpoint which returns an empty dict even when no consumer record exists for the {consumer_uuid}) we need to do this functional test using only the object layer. """ # We will use two consumers in this test, only one of which will get # all of its allocations deleted in a transaction (and we expect that # consumer record to be deleted) c1 = consumer_obj.Consumer( self.ctx, uuid=uuids.consumer1, user=self.user_obj, project=self.project_obj) c1.create() c2 = consumer_obj.Consumer( self.ctx, uuid=uuids.consumer2, user=self.user_obj, project=self.project_obj) c2.create() # Create some inventory that we will allocate cn1 = self._create_provider('cn1') tb.add_inventory(cn1, orc.VCPU, 8) tb.add_inventory(cn1, orc.MEMORY_MB, 2048) tb.add_inventory(cn1, orc.DISK_GB, 2000) # Now allocate some of that inventory to two different consumers allocs = [ alloc_obj.Allocation( consumer=c1, resource_provider=cn1, resource_class=orc.VCPU, used=1), alloc_obj.Allocation( consumer=c1, resource_provider=cn1, resource_class=orc.MEMORY_MB, used=512), alloc_obj.Allocation( consumer=c2, resource_provider=cn1, resource_class=orc.VCPU, used=1), alloc_obj.Allocation( consumer=c2, resource_provider=cn1, resource_class=orc.MEMORY_MB, used=512), ] alloc_obj.replace_all(self.ctx, allocs) # Validate that we have consumer records for both consumers for c_uuid in (uuids.consumer1, uuids.consumer2): c_obj = consumer_obj.Consumer.get_by_uuid(self.ctx, c_uuid) self.assertIsNotNone(c_obj) # OK, now "remove" the allocation for consumer2 by setting the used # value for both allocated resources to 0 and re-running the # alloc_obj.replace_all(). This should end up deleting the # consumer record for consumer2 allocs = [ alloc_obj.Allocation( consumer=c2, resource_provider=cn1, resource_class=orc.VCPU, used=0), alloc_obj.Allocation( consumer=c2, resource_provider=cn1, resource_class=orc.MEMORY_MB, used=0), ] alloc_obj.replace_all(self.ctx, allocs) # consumer1 should still exist... c_obj = consumer_obj.Consumer.get_by_uuid(self.ctx, uuids.consumer1) self.assertIsNotNone(c_obj) # but not consumer2... self.assertRaises( exception.NotFound, consumer_obj.Consumer.get_by_uuid, self.ctx, uuids.consumer2) # DELETE /allocations/{consumer_uuid} is the other place where we # delete all allocations for a consumer. 
Let's delete all for consumer1 # and check that the consumer record is deleted alloc_list = alloc_obj.get_all_by_consumer_id( self.ctx, uuids.consumer1) alloc_obj.delete_all(self.ctx, alloc_list) # consumer1 should no longer exist in the DB since we just deleted all # of its allocations self.assertRaises( exception.NotFound, consumer_obj.Consumer.get_by_uuid, self.ctx, uuids.consumer1) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/db/test_consumer_type.py0000664000175000017500000000332200000000000030003 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from placement import exception from placement.objects import consumer_type as ct_obj from placement.tests.functional.db import test_base as tb class ConsumerTypeTestCase(tb.PlacementDbBaseTestCase): def test_get_by_name_and_id(self): ct = ct_obj.ConsumerType(self.context, name='MIGRATION') ct.create() named_ct = ct_obj.ConsumerType.get_by_name(self.context, 'MIGRATION') self.assertEqual(ct.id, named_ct.id) id_ct = ct_obj.ConsumerType.get_by_id(self.context, ct.id) self.assertEqual(ct.name, id_ct.name) def test_id_not_found(self): self.assertRaises( exception.ConsumerTypeNotFound, ct_obj.ConsumerType.get_by_id, self.context, 999999) def test_name_not_found(self): self.assertRaises( exception.ConsumerTypeNotFound, ct_obj.ConsumerType.get_by_name, self.context, 'LOSTPONY') def test_duplicate_create(self): ct = ct_obj.ConsumerType(self.context, name='MIGRATION') ct.create() ct2 = ct_obj.ConsumerType(self.context, name='MIGRATION') self.assertRaises(exception.ConsumerTypeExists, ct2.create) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/db/test_migrations.py0000664000175000017500000003367100000000000027275 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests for database migrations. There are "opportunistic" tests for sqlite in memory, mysql and postgresql in here, which allows testing against these databases in a properly configured unit test environment. For the opportunistic testing you need to set up a db named 'openstack_citest' with user 'openstack_citest' and password 'openstack_citest' on localhost. This can be accomplished by running the `test-setup.sh` script in the `tools` subdirectory. The test will then use that DB and username/password combo to run the tests. 
""" from unittest import mock from alembic import script from oslo_db.sqlalchemy import test_fixtures from oslo_db.sqlalchemy import test_migrations from oslo_db.sqlalchemy import utils as db_utils from oslo_log import log as logging from oslo_utils.fixture import uuidsentinel as uuids from sqlalchemy import inspect from placement.db.sqlalchemy import migration from placement.db.sqlalchemy import models from placement import db_api from placement.tests.functional import base LOG = logging.getLogger(__name__) class WalkVersionsMixin(object): def _walk_versions(self): """Determine latest version script from the repo, then upgrade from 1 through to the latest, with no data in the databases. This just checks that the schema itself upgrades successfully. """ # Place the database under version control script_directory = script.ScriptDirectory.from_config(self.config) self.assertIsNone(self.migration_api.version(self.config)) versions = [ver for ver in script_directory.walk_revisions()] for version in reversed(versions): self._migrate_up(version.revision, with_data=True) def _migrate_up(self, version, with_data=False): """Migrate up to a new version of the db. We allow for data insertion and post checks at every migration version with special _pre_upgrade_### and _check_### functions in the main test. """ # NOTE(sdague): try block is here because it's impossible to debug # where a failed data migration happens otherwise try: if with_data: data = None pre_upgrade = getattr( self, "_pre_upgrade_%s" % version, None) if pre_upgrade: data = pre_upgrade(self.engine) self.migration_api.upgrade(version, config=self.config) self.assertEqual(version, self.migration_api.version(self.config)) if with_data: check = getattr(self, "_check_%s" % version, None) if check: check(self.engine, data) except Exception: LOG.error("Failed to migrate to version %(version)s on engine " "%(engine)s", {'version': version, 'engine': self.engine}) raise class TestWalkVersions(base.NoDBTestCase, WalkVersionsMixin): def setUp(self): super(TestWalkVersions, self).setUp() self.migration_api = mock.MagicMock() self.engine = mock.MagicMock() self.config = mock.MagicMock() self.versions = [mock.Mock(revision='2b2'), mock.Mock(revision='1a1')] def test_migrate_up(self): self.migration_api.version.return_value = 'dsa123' self._migrate_up('dsa123') self.migration_api.upgrade.assert_called_with('dsa123', config=self.config) self.migration_api.version.assert_called_with(self.config) def test_migrate_up_with_data(self): test_value = {"a": 1, "b": 2} self.migration_api.version.return_value = '141' self._pre_upgrade_141 = mock.MagicMock() self._pre_upgrade_141.return_value = test_value self._check_141 = mock.MagicMock() self._migrate_up('141', True) self._pre_upgrade_141.assert_called_with(self.engine) self._check_141.assert_called_with(self.engine, test_value) @mock.patch.object(script, 'ScriptDirectory') @mock.patch.object(WalkVersionsMixin, '_migrate_up') def test_walk_versions_all_default(self, _migrate_up, script_directory): fc = script_directory.from_config() fc.walk_revisions.return_value = self.versions self.migration_api.version.return_value = None self._walk_versions() self.migration_api.version.assert_called_with(self.config) upgraded = [mock.call(v.revision, with_data=True) for v in reversed(self.versions)] self.assertEqual(self._migrate_up.call_args_list, upgraded) @mock.patch.object(script, 'ScriptDirectory') @mock.patch.object(WalkVersionsMixin, '_migrate_up') def test_walk_versions_all_false(self, _migrate_up, 
script_directory): fc = script_directory.from_config() fc.walk_revisions.return_value = self.versions self.migration_api.version.return_value = None self._walk_versions() upgraded = [mock.call(v.revision, with_data=True) for v in reversed(self.versions)] self.assertEqual(upgraded, self._migrate_up.call_args_list) class MigrationCheckersMixin(object): def setUp(self): super(MigrationCheckersMixin, self).setUp() self.engine = db_api.placement_context_manager.writer.get_engine() self.config = migration._alembic_config() self.migration_api = migration def test_walk_versions(self): self._walk_versions() # # Leaving this here as a sort of template for when we do migration tests. # def _check_fb3f10dd262e(self, engine, data): # nodes_tbl = db_utils.get_table(engine, 'nodes') # col_names = [column.name for column in nodes_tbl.c] # self.assertIn('fault', col_names) # self.assertIsInstance(nodes_tbl.c.fault.type, # sqlalchemy.types.String) def test_upgrade_and_version(self): self.migration_api.upgrade('head') self.assertIsNotNone(self.migration_api.version()) def test_upgrade_twice(self): # Start with the empty version self.migration_api.upgrade('base') v1 = self.migration_api.version() # Now upgrade to head self.migration_api.upgrade('head') v2 = self.migration_api.version() self.assertNotEqual(v1, v2) def test_block_on_null_root_provider_id(self): """Upgrades the schema to b4ed3a175331 (initial), injects a resource provider with no root provider and then tries to upgrade to head which should fail on the 611cd6dffd7b blocker migration. """ # Upgrade to populate the schema. self.migration_api.upgrade('b4ed3a175331') # Now insert a resource provider with no root. rps = db_utils.get_table(self.engine, 'resource_providers') ins_stmt = rps.insert().values( name='fake-rp-name', uuid=uuids.rp_uuid, ) with self.engine.connect() as conn, conn.begin(): rp_id = conn.execute(ins_stmt).inserted_primary_key[0] # Now run the blocker migration and it should raise an error. ex = self.assertRaises( # noqa H202 Exception, self.migration_api.upgrade, '611cd6dffd7b') # Make sure it's the error we expect. self.assertIn('There is at least one resource provider table ' 'record which is missing its root provider id.', str(ex)) # Now update the resource provider with a root_provider_id. update_stmt = rps.update().values( root_provider_id=rp_id, ).where(rps.c.id == rp_id) with self.engine.connect() as conn, conn.begin(): conn.execute(update_stmt) # Re-run the upgrade and it should be OK. self.migration_api.upgrade('611cd6dffd7b') def test_block_on_missing_consumer(self): """Upgrades the schema to b4ed3a175331 (initial), injects an allocation without a corresponding consumer record and then tries to upgrade to head which should fail on the b5c396305c25 blocker migration. """ # Upgrade to populate the schema. self.migration_api.upgrade('b4ed3a175331') # Now insert a resource provider to build off rps = db_utils.get_table(self.engine, 'resource_providers') ins_stmt = rps.insert().values( name='fake-rp-name', uuid=uuids.rp_uuid, root_provider_id=1, ) with self.engine.connect() as conn, conn.begin(): rp_id = conn.execute(ins_stmt).inserted_primary_key[0] # Now insert an allocation allocations = db_utils.get_table(self.engine, 'allocations') ins_stmt = allocations.insert().values( resource_provider_id=rp_id, resource_class_id=1, used=5, consumer_id=uuids.consumer1, ) with self.engine.connect() as conn, conn.begin(): conn.execute(ins_stmt).inserted_primary_key[0] # Now run the blocker migration and it should raise an error. 
ex = self.assertRaises( # noqa H202 Exception, self.migration_api.upgrade, 'b5c396305c25') # Make sure it's the error we expect. self.assertIn('There is at least one allocation record which is ' 'missing a consumer record.', str(ex)) # Add a (faked) consumer record and try again consumers = db_utils.get_table(self.engine, 'consumers') ins_stmt = consumers.insert().values( uuid=uuids.consumer1, project_id=1, user_id=1, ) with self.engine.connect() as conn, conn.begin(): conn.execute(ins_stmt).inserted_primary_key[0] self.migration_api.upgrade('b5c396305c25') def test_consumer_types_422ece571366(self): # Upgrade to populate the schema. self.migration_api.upgrade('422ece571366') insp = inspect(self.engine) # Test creation of consumer_types table con = db_utils.get_table(self.engine, 'consumer_types') col_names = [column.name for column in con.c] self.assertIn('created_at', col_names) self.assertIn('updated_at', col_names) self.assertIn('id', col_names) self.assertIn('name', col_names) # check constraints pkey = insp.get_pk_constraint("consumer_types") self.assertEqual(['id'], pkey['constrained_columns']) ukey = insp.get_unique_constraints("consumer_types") self.assertEqual('uniq_consumer_types0name', ukey[0]['name']) def test_consumer_type_id_column_422ece571366(self): # Upgrade to populate the schema. self.migration_api.upgrade('422ece571366') insp = inspect(self.engine) # Test creation of consumer_types table consumers = db_utils.get_table(self.engine, 'consumers') col_names = [column.name for column in consumers.c] self.assertIn('consumer_type_id', col_names) # Check index and constraints fkey = insp.get_foreign_keys("consumers") self.assertEqual(['consumer_type_id'], fkey[0]['constrained_columns']) ind = insp.get_indexes('consumers') names = [r['name'] for r in ind] self.assertIn('consumers_consumer_type_id_idx', names) class PlacementOpportunisticFixture(object): def get_enginefacade(self): return db_api.placement_context_manager class SQLiteOpportunisticFixture( PlacementOpportunisticFixture, test_fixtures.OpportunisticDbFixture): pass class MySQLOpportunisticFixture( PlacementOpportunisticFixture, test_fixtures.MySQLOpportunisticFixture): pass class PostgresqlOpportunisticFixture( PlacementOpportunisticFixture, test_fixtures.PostgresqlOpportunisticFixture): pass class TestMigrationsSQLite(MigrationCheckersMixin, WalkVersionsMixin, test_fixtures.OpportunisticDBTestMixin, base.NoDBTestCase): FIXTURE = SQLiteOpportunisticFixture class TestMigrationsMySQL(MigrationCheckersMixin, WalkVersionsMixin, test_fixtures.OpportunisticDBTestMixin, base.NoDBTestCase): FIXTURE = MySQLOpportunisticFixture class TestMigrationsPostgresql(MigrationCheckersMixin, WalkVersionsMixin, test_fixtures.OpportunisticDBTestMixin, base.NoDBTestCase): FIXTURE = PostgresqlOpportunisticFixture class _TestModelsMigrations(test_migrations.ModelsMigrationsSync): def get_metadata(self): return models.BASE.metadata def get_engine(self): return db_api.get_placement_engine() def db_sync(self, engine): migration.upgrade('head') class ModelsMigrationsSyncSqlite(_TestModelsMigrations, test_fixtures.OpportunisticDBTestMixin, base.NoDBTestCase): FIXTURE = SQLiteOpportunisticFixture class ModelsMigrationsSyncMysql(_TestModelsMigrations, test_fixtures.OpportunisticDBTestMixin, base.NoDBTestCase): FIXTURE = MySQLOpportunisticFixture class ModelsMigrationsSyncPostgresql(_TestModelsMigrations, test_fixtures.OpportunisticDBTestMixin, base.NoDBTestCase): FIXTURE = PostgresqlOpportunisticFixture 
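# A minimal sketch of running the opportunistic migration tests locally,
# assuming the openstack_citest database/user described in the module
# docstring have been created (e.g. via tools/test-setup.sh); the exact
# tox environment name is an assumption and may differ per branch:
#
#   $ ./tools/test-setup.sh
#   $ tox -e functional -- placement.tests.functional.db.test_migrations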
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/db/test_project.py0000664000175000017500000000254500000000000026563 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils.fixture import uuidsentinel as uuids from placement import exception from placement.objects import project as project_obj from placement.tests.functional.db import test_base as tb class ProjectTestCase(tb.PlacementDbBaseTestCase): def test_non_existing_project(self): self.assertRaises( exception.ProjectNotFound, project_obj.Project.get_by_external_id, self.ctx, uuids.non_existing_project) def test_create_and_get(self): p = project_obj.Project(self.ctx, external_id='another-project') p.create() p = project_obj.Project.get_by_external_id(self.ctx, 'another-project') # Project ID == 1 is fake-project created in setup self.assertEqual(2, p.id) self.assertRaises(exception.ProjectExists, p.create) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/db/test_reshape.py0000664000175000017500000004043100000000000026540 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_utils.fixture import uuidsentinel as uuids from placement import exception from placement.objects import allocation as alloc_obj from placement.objects import consumer as consumer_obj from placement.objects import inventory as inv_obj from placement.objects import reshaper from placement.objects import resource_provider as rp_obj from placement.tests.functional.db import test_base as tb def alloc_for_rc(alloc_list, rc): for alloc in alloc_list: if alloc.resource_class == rc: return alloc class ReshapeTestCase(tb.PlacementDbBaseTestCase): """Test 'replace the world' reshape transaction.""" def test_reshape(self): """We set up the following scenario: BEFORE: single compute node setup A single compute node with: - VCPU, MEMORY_MB, DISK_GB inventory - Two instances consuming CPU, RAM and DISK from that compute node AFTER: hierarchical + shared storage setup A compute node parent provider with: - MEMORY_MB Two NUMA node child providers containing: - VCPU Shared storage provider with: - DISK_GB Both instances have their resources split among the providers and shared storage accordingly """ # First create our consumers i1_uuid = uuids.instance1 i1_consumer = consumer_obj.Consumer( self.ctx, uuid=i1_uuid, user=self.user_obj, project=self.project_obj) i1_consumer.create() i2_uuid = uuids.instance2 i2_consumer = consumer_obj.Consumer( self.ctx, uuid=i2_uuid, user=self.user_obj, project=self.project_obj) i2_consumer.create() cn1 = self._create_provider('cn1') tb.add_inventory(cn1, 'VCPU', 16) tb.add_inventory(cn1, 'MEMORY_MB', 32768) tb.add_inventory(cn1, 'DISK_GB', 1000) # Allocate both instances against the single compute node for consumer in (i1_consumer, i2_consumer): allocs = [ alloc_obj.Allocation( resource_provider=cn1, resource_class='VCPU', consumer=consumer, used=2), alloc_obj.Allocation( resource_provider=cn1, resource_class='MEMORY_MB', consumer=consumer, used=1024), alloc_obj.Allocation( resource_provider=cn1, resource_class='DISK_GB', consumer=consumer, used=100), ] alloc_obj.replace_all(self.ctx, allocs) # Verify we have the allocations we expect for the BEFORE scenario before_allocs_i1 = alloc_obj.get_all_by_consumer_id(self.ctx, i1_uuid) self.assertEqual(3, len(before_allocs_i1)) self.assertEqual(cn1.uuid, before_allocs_i1[0].resource_provider.uuid) before_allocs_i2 = alloc_obj.get_all_by_consumer_id(self.ctx, i2_uuid) self.assertEqual(3, len(before_allocs_i2)) self.assertEqual(cn1.uuid, before_allocs_i2[2].resource_provider.uuid) # Before we issue the actual reshape() call, we need to first create # the child providers and sharing storage provider. These are actions # that the virt driver or external agent is responsible for performing # *before* attempting any reshape activity. cn1_numa0 = self._create_provider('cn1_numa0', parent=cn1.uuid) cn1_numa1 = self._create_provider('cn1_numa1', parent=cn1.uuid) ss = self._create_provider('ss') # OK, now emulate the call to POST /reshaper that will be triggered by # a virt driver wanting to replace the world and change its modeling # from a single provider to a nested provider tree along with a sharing # storage provider. 
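        # For reference, the REST-level POST /reshaper body that a virt
        # driver would send is shaped roughly like the following (values
        # illustrative, generations elided):
        #
        #   {
        #       "inventories": {
        #           "<cn1 uuid>": {
        #               "resource_provider_generation": <gen>,
        #               "inventories": {"MEMORY_MB": {"total": 32768}}
        #           },
        #           "<cn1_numa0 uuid>": {...},
        #           "<cn1_numa1 uuid>": {...},
        #           "<ss uuid>": {...}
        #       },
        #       "allocations": {
        #           "<instance1 uuid>": {
        #               "allocations": {
        #                   "<cn1_numa0 uuid>": {"resources": {"VCPU": 2}},
        #                   "<cn1 uuid>": {"resources": {"MEMORY_MB": 1024}},
        #                   "<ss uuid>": {"resources": {"DISK_GB": 100}}
        #               },
        #               "project_id": "...",
        #               "user_id": "...",
        #               "consumer_generation": <gen>
        #           },
        #           "<instance2 uuid>": {...}
        #       }
        #   }
        #
        # This test builds the object-layer equivalent directly below.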
after_inventories = { # cn1 keeps the RAM only cn1: [ inv_obj.Inventory( resource_provider=cn1, resource_class='MEMORY_MB', total=32768, reserved=0, max_unit=32768, min_unit=1, step_size=1, allocation_ratio=1.0), ], # each NUMA node gets half of the CPUs cn1_numa0: [ inv_obj.Inventory( resource_provider=cn1_numa0, resource_class='VCPU', total=8, reserved=0, max_unit=8, min_unit=1, step_size=1, allocation_ratio=1.0), ], cn1_numa1: [ inv_obj.Inventory( resource_provider=cn1_numa1, resource_class='VCPU', total=8, reserved=0, max_unit=8, min_unit=1, step_size=1, allocation_ratio=1.0), ], # The sharing provider gets a bunch of disk ss: [ inv_obj.Inventory( resource_provider=ss, resource_class='DISK_GB', total=100000, reserved=0, max_unit=1000, min_unit=1, step_size=1, allocation_ratio=1.0), ], } # We do a fetch from the DB for each instance to get its latest # generation. This would be done by the resource tracker or scheduler # report client before issuing the call to reshape() because the # consumers representing the two instances above will have had their # generations incremented in the original call to PUT # /allocations/{consumer_uuid} i1_consumer = consumer_obj.Consumer.get_by_uuid(self.ctx, i1_uuid) i2_consumer = consumer_obj.Consumer.get_by_uuid(self.ctx, i2_uuid) after_allocs = [ # instance1 gets VCPU from NUMA0, MEMORY_MB from cn1 and DISK_GB # from the sharing storage provider alloc_obj.Allocation( resource_provider=cn1_numa0, resource_class='VCPU', consumer=i1_consumer, used=2), alloc_obj.Allocation( resource_provider=cn1, resource_class='MEMORY_MB', consumer=i1_consumer, used=1024), alloc_obj.Allocation( resource_provider=ss, resource_class='DISK_GB', consumer=i1_consumer, used=100), # instance2 gets VCPU from NUMA1, MEMORY_MB from cn1 and DISK_GB # from the sharing storage provider alloc_obj.Allocation( resource_provider=cn1_numa1, resource_class='VCPU', consumer=i2_consumer, used=2), alloc_obj.Allocation( resource_provider=cn1, resource_class='MEMORY_MB', consumer=i2_consumer, used=1024), alloc_obj.Allocation( resource_provider=ss, resource_class='DISK_GB', consumer=i2_consumer, used=100), ] reshaper.reshape(self.ctx, after_inventories, after_allocs) # Verify that the inventories have been moved to the appropriate # providers in the AFTER scenario # The root compute node should only have MEMORY_MB, nothing else cn1_inv = inv_obj.get_all_by_resource_provider(self.ctx, cn1) self.assertEqual(1, len(cn1_inv)) self.assertEqual('MEMORY_MB', cn1_inv[0].resource_class) self.assertEqual(32768, cn1_inv[0].total) # Each NUMA node should only have half the original VCPU, nothing else numa0_inv = inv_obj.get_all_by_resource_provider(self.ctx, cn1_numa0) self.assertEqual(1, len(numa0_inv)) self.assertEqual('VCPU', numa0_inv[0].resource_class) self.assertEqual(8, numa0_inv[0].total) numa1_inv = inv_obj.get_all_by_resource_provider(self.ctx, cn1_numa1) self.assertEqual(1, len(numa1_inv)) self.assertEqual('VCPU', numa1_inv[0].resource_class) self.assertEqual(8, numa1_inv[0].total) # The sharing storage provider should only have DISK_GB, nothing else ss_inv = inv_obj.get_all_by_resource_provider(self.ctx, ss) self.assertEqual(1, len(ss_inv)) self.assertEqual('DISK_GB', ss_inv[0].resource_class) self.assertEqual(100000, ss_inv[0].total) # Verify we have the allocations we expect for the AFTER scenario after_allocs_i1 = alloc_obj.get_all_by_consumer_id(self.ctx, i1_uuid) self.assertEqual(3, len(after_allocs_i1)) # Our VCPU allocation should be in the NUMA0 node vcpu_alloc = 
alloc_for_rc(after_allocs_i1, 'VCPU') self.assertIsNotNone(vcpu_alloc) self.assertEqual(cn1_numa0.uuid, vcpu_alloc.resource_provider.uuid) # Our DISK_GB allocation should be in the sharing provider disk_alloc = alloc_for_rc(after_allocs_i1, 'DISK_GB') self.assertIsNotNone(disk_alloc) self.assertEqual(ss.uuid, disk_alloc.resource_provider.uuid) # And our MEMORY_MB should remain on the root compute node ram_alloc = alloc_for_rc(after_allocs_i1, 'MEMORY_MB') self.assertIsNotNone(ram_alloc) self.assertEqual(cn1.uuid, ram_alloc.resource_provider.uuid) after_allocs_i2 = alloc_obj.get_all_by_consumer_id(self.ctx, i2_uuid) self.assertEqual(3, len(after_allocs_i2)) # Our VCPU allocation should be in the NUMA1 node vcpu_alloc = alloc_for_rc(after_allocs_i2, 'VCPU') self.assertIsNotNone(vcpu_alloc) self.assertEqual(cn1_numa1.uuid, vcpu_alloc.resource_provider.uuid) # Our DISK_GB allocation should be in the sharing provider disk_alloc = alloc_for_rc(after_allocs_i2, 'DISK_GB') self.assertIsNotNone(disk_alloc) self.assertEqual(ss.uuid, disk_alloc.resource_provider.uuid) # And our MEMORY_MB should remain on the root compute node ram_alloc = alloc_for_rc(after_allocs_i2, 'MEMORY_MB') self.assertIsNotNone(ram_alloc) self.assertEqual(cn1.uuid, ram_alloc.resource_provider.uuid) def test_reshape_concurrent_inventory_update(self): """Valid failure scenario for reshape(). We test a situation where the virt driver has constructed it's "after inventories and allocations" and sent those to the POST /reshape endpoint. The reshape POST handler does a quick check of the resource provider generations sent in the payload and they all check out. However, right before the call to resource_provider.reshape(), another thread legitimately changes the inventory of one of the providers involved in the reshape transaction. We should get a ConcurrentUpdateDetected in this case. """ # First create our consumers i1_uuid = uuids.instance1 i1_consumer = consumer_obj.Consumer( self.ctx, uuid=i1_uuid, user=self.user_obj, project=self.project_obj) i1_consumer.create() # then all our original providers cn1 = self._create_provider('cn1') tb.add_inventory(cn1, 'VCPU', 16) tb.add_inventory(cn1, 'MEMORY_MB', 32768) tb.add_inventory(cn1, 'DISK_GB', 1000) # Allocate an instance on our compute node allocs = [ alloc_obj.Allocation( resource_provider=cn1, resource_class='VCPU', consumer=i1_consumer, used=2), alloc_obj.Allocation( resource_provider=cn1, resource_class='MEMORY_MB', consumer=i1_consumer, used=1024), alloc_obj.Allocation( resource_provider=cn1, resource_class='DISK_GB', consumer=i1_consumer, used=100), ] alloc_obj.replace_all(self.ctx, allocs) # Before we issue the actual reshape() call, we need to first create # the child providers and sharing storage provider. These are actions # that the virt driver or external agent is responsible for performing # *before* attempting any reshape activity. cn1_numa0 = self._create_provider('cn1_numa0', parent=cn1.uuid) cn1_numa1 = self._create_provider('cn1_numa1', parent=cn1.uuid) ss = self._create_provider('ss') # OK, now emulate the call to POST /reshaper that will be triggered by # a virt driver wanting to replace the world and change its modeling # from a single provider to a nested provider tree along with a sharing # storage provider. 
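        # Placement guards provider updates with generation-based optimistic
        # locking: every successful write bumps the provider's generation and
        # only succeeds if the generation the writer originally read is still
        # current. Conceptually the update is something like
        #
        #   UPDATE resource_providers
        #      SET generation = generation + 1, ...
        #    WHERE id = :rp_id AND generation = :expected_generation
        #
        # (illustrative SQL, not the literal statement placement issues). The
        # out-of-band inventory change made further down bumps the sharing
        # provider's generation, which is what makes the final reshape() call
        # raise ConcurrentUpdateDetected.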
after_inventories = { # cn1 keeps the RAM only cn1: [ inv_obj.Inventory( resource_provider=cn1, resource_class='MEMORY_MB', total=32768, reserved=0, max_unit=32768, min_unit=1, step_size=1, allocation_ratio=1.0), ], # each NUMA node gets half of the CPUs cn1_numa0: [ inv_obj.Inventory( resource_provider=cn1_numa0, resource_class='VCPU', total=8, reserved=0, max_unit=8, min_unit=1, step_size=1, allocation_ratio=1.0), ], cn1_numa1: [ inv_obj.Inventory( resource_provider=cn1_numa1, resource_class='VCPU', total=8, reserved=0, max_unit=8, min_unit=1, step_size=1, allocation_ratio=1.0), ], # The sharing provider gets a bunch of disk ss: [ inv_obj.Inventory( resource_provider=ss, resource_class='DISK_GB', total=100000, reserved=0, max_unit=1000, min_unit=1, step_size=1, allocation_ratio=1.0), ], } # We do a fetch from the DB for each instance to get its latest # generation. This would be done by the resource tracker or scheduler # report client before issuing the call to reshape() because the # consumers representing the two instances above will have had their # generations incremented in the original call to PUT # /allocations/{consumer_uuid} i1_consumer = consumer_obj.Consumer.get_by_uuid(self.ctx, i1_uuid) after_allocs = [ # instance1 gets VCPU from NUMA0, MEMORY_MB from cn1 and DISK_GB # from the sharing storage provider alloc_obj.Allocation( resource_provider=cn1_numa0, resource_class='VCPU', consumer=i1_consumer, used=2), alloc_obj.Allocation( resource_provider=cn1, resource_class='MEMORY_MB', consumer=i1_consumer, used=1024), alloc_obj.Allocation( resource_provider=ss, resource_class='DISK_GB', consumer=i1_consumer, used=100), ] # OK, now before we call reshape(), here we emulate another thread # changing the inventory for the sharing storage provider in between # the time in the REST handler when the sharing storage provider's # generation was validated and the actual call to reshape() ss_threadB = rp_obj.ResourceProvider.get_by_uuid(self.ctx, ss.uuid) # Reduce the amount of storage to 2000, from 100000. new_ss_inv = [ inv_obj.Inventory( resource_provider=ss_threadB, resource_class='DISK_GB', total=2000, reserved=0, max_unit=1000, min_unit=1, step_size=1, allocation_ratio=1.0)] ss_threadB.set_inventory(new_ss_inv) # Double check our storage provider's generation is now greater than # the original storage provider record being sent to reshape() self.assertGreater(ss_threadB.generation, ss.generation) # And we should legitimately get a failure now to reshape() due to # another thread updating one of the involved provider's generations self.assertRaises( exception.ConcurrentUpdateDetected, reshaper.reshape, self.ctx, after_inventories, after_allocs) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/db/test_resource_class.py0000664000175000017500000002157700000000000030137 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
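# A couple of properties of resource classes that the tests below lean on:
# standard classes come from the os-resource-classes library (orc.STANDARDS),
# while custom classes must be named with a CUSTOM_ prefix and are assigned
# integer IDs at or above ResourceClass.MIN_CUSTOM_RESOURCE_CLASS_ID so they
# cannot collide with the standard IDs.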
from unittest import mock import os_resource_classes as orc from oslo_utils.fixture import uuidsentinel import placement from placement import exception from placement.objects import inventory as inv_obj from placement.objects import resource_class as rc_obj from placement.objects import resource_provider as rp_obj from placement.tests.functional.db import test_base as tb class ResourceClassListTestCase(tb.PlacementDbBaseTestCase): def test_get_all_no_custom(self): """Test that if we haven't yet added any custom resource classes, that we only get a list of ResourceClass objects representing the standard classes. """ rcs = rc_obj.get_all(self.ctx) self.assertEqual(len(orc.STANDARDS), len(rcs)) def test_get_all_with_custom(self): """Test that if we add some custom resource classes, that we get a list of ResourceClass objects representing the standard classes as well as the custom classes. """ customs = [ ('CUSTOM_IRON_NFV', 10001), ('CUSTOM_IRON_ENTERPRISE', 10002), ] with self.placement_db.get_engine().connect() as conn: with conn.begin(): for custom in customs: c_name, c_id = custom ins = rc_obj._RC_TBL.insert().values(id=c_id, name=c_name) conn.execute(ins) rcs = rc_obj.get_all(self.ctx) expected_count = (len(orc.STANDARDS) + len(customs)) self.assertEqual(expected_count, len(rcs)) class ResourceClassTestCase(tb.PlacementDbBaseTestCase): def test_get_by_name(self): rc = rc_obj.ResourceClass.get_by_name( self.ctx, orc.VCPU ) vcpu_id = orc.STANDARDS.index(orc.VCPU) self.assertEqual(vcpu_id, rc.id) self.assertEqual(orc.VCPU, rc.name) def test_get_by_name_not_found(self): self.assertRaises(exception.ResourceClassNotFound, rc_obj.ResourceClass.get_by_name, self.ctx, 'CUSTOM_NO_EXISTS') def test_get_by_name_custom(self): rc = rc_obj.ResourceClass( self.ctx, name='CUSTOM_IRON_NFV', ) rc.create() get_rc = rc_obj.ResourceClass.get_by_name( self.ctx, 'CUSTOM_IRON_NFV', ) self.assertEqual(rc.id, get_rc.id) self.assertEqual(rc.name, get_rc.name) def test_create_fail_not_using_namespace(self): rc = rc_obj.ResourceClass( context=self.ctx, name='IRON_NFV', ) exc = self.assertRaises(exception.ObjectActionError, rc.create) self.assertIn('name must start with', str(exc)) def test_create_duplicate_standard(self): rc = rc_obj.ResourceClass( context=self.ctx, name=orc.VCPU, ) self.assertRaises(exception.ResourceClassExists, rc.create) def test_create(self): rc = rc_obj.ResourceClass( self.ctx, name='CUSTOM_IRON_NFV', ) rc.create() min_id = rc_obj.ResourceClass.MIN_CUSTOM_RESOURCE_CLASS_ID self.assertEqual(min_id, rc.id) rc = rc_obj.ResourceClass( self.ctx, name='CUSTOM_IRON_ENTERPRISE', ) rc.create() self.assertEqual(min_id + 1, rc.id) @mock.patch.object(placement.objects.resource_class.ResourceClass, "_get_next_id") def test_create_duplicate_id_retry(self, mock_get): # This order of ID generation will create rc1 with an ID of 42, try to # create rc2 with the same ID, and then return 43 in the retry loop. mock_get.side_effect = (42, 42, 43) rc1 = rc_obj.ResourceClass( self.ctx, name='CUSTOM_IRON_NFV', ) rc1.create() rc2 = rc_obj.ResourceClass( self.ctx, name='CUSTOM_TWO', ) rc2.create() self.assertEqual(rc1.id, 42) self.assertEqual(rc2.id, 43) @mock.patch.object(placement.objects.resource_class.ResourceClass, "_get_next_id") def test_create_duplicate_id_retry_failing(self, mock_get): """negative case for test_create_duplicate_id_retry""" # This order of ID generation will create rc1 with an ID of 44, try to # create rc2 with the same ID, and then return 45 in the retry loop. 
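        # (Unlike the success case above, the mocked generator here never
        # moves past the colliding value: every attempt sees 44, so the retry
        # loop is exhausted and MaxDBRetriesExceeded is the expected outcome.)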
mock_get.side_effect = (44, 44, 44, 44) rc1 = rc_obj.ResourceClass( self.ctx, name='CUSTOM_IRON_NFV', ) rc1.create() rc2 = rc_obj.ResourceClass( self.ctx, name='CUSTOM_TWO', ) rc2.RESOURCE_CREATE_RETRY_COUNT = 3 self.assertRaises(exception.MaxDBRetriesExceeded, rc2.create) def test_create_duplicate_custom(self): rc = rc_obj.ResourceClass( self.ctx, name='CUSTOM_IRON_NFV', ) rc.create() self.assertEqual(rc_obj.ResourceClass.MIN_CUSTOM_RESOURCE_CLASS_ID, rc.id) rc = rc_obj.ResourceClass( self.ctx, name='CUSTOM_IRON_NFV', ) self.assertRaises(exception.ResourceClassExists, rc.create) def test_destroy_fail_no_id(self): rc = rc_obj.ResourceClass( self.ctx, name='CUSTOM_IRON_NFV', ) self.assertRaises(exception.ObjectActionError, rc.destroy) def test_destroy_fail_standard(self): rc = rc_obj.ResourceClass.get_by_name( self.ctx, 'VCPU', ) self.assertRaises(exception.ResourceClassCannotDeleteStandard, rc.destroy) def test_destroy(self): rc = rc_obj.ResourceClass( self.ctx, name='CUSTOM_IRON_NFV', ) rc.create() rc_list = rc_obj.get_all(self.ctx) rc_ids = (r.id for r in rc_list) self.assertIn(rc.id, rc_ids) rc = rc_obj.ResourceClass.get_by_name( self.ctx, 'CUSTOM_IRON_NFV', ) rc.destroy() rc_list = rc_obj.get_all(self.ctx) rc_ids = (r.id for r in rc_list) self.assertNotIn(rc.id, rc_ids) # Verify rc cache was purged of the old entry self.assertRaises(exception.ResourceClassNotFound, rc_obj.ResourceClass.get_by_name, self.ctx, 'CUSTOM_IRON_NFV') def test_destroy_fail_with_inventory(self): """Test that we raise an exception when attempting to delete a resource class that is referenced in an inventory record. """ rc = rc_obj.ResourceClass( self.ctx, name='CUSTOM_IRON_NFV', ) rc.create() rp = rp_obj.ResourceProvider( self.ctx, name='my rp', uuid=uuidsentinel.rp, ) rp.create() inv = inv_obj.Inventory( resource_provider=rp, resource_class='CUSTOM_IRON_NFV', total=1, ) rp.set_inventory([inv]) self.assertRaises(exception.ResourceClassInUse, rc.destroy) rp.set_inventory([]) rc.destroy() rc_list = rc_obj.get_all(self.ctx) rc_ids = (r.id for r in rc_list) self.assertNotIn(rc.id, rc_ids) def test_save_fail_no_id(self): rc = rc_obj.ResourceClass( self.ctx, name='CUSTOM_IRON_NFV', ) self.assertRaises(exception.ObjectActionError, rc.save) def test_save_fail_standard(self): rc = rc_obj.ResourceClass.get_by_name( self.ctx, 'VCPU', ) self.assertRaises(exception.ResourceClassCannotUpdateStandard, rc.save) def test_save(self): rc = rc_obj.ResourceClass( self.ctx, name='CUSTOM_IRON_NFV', ) rc.create() rc = rc_obj.ResourceClass.get_by_name( self.ctx, 'CUSTOM_IRON_NFV', ) rc.name = 'CUSTOM_IRON_SILVER' rc.save() # Verify rc cache was purged of the old entry self.assertRaises(exception.NotFound, rc_obj.ResourceClass.get_by_name, self.ctx, 'CUSTOM_IRON_NFV') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/db/test_resource_provider.py0000664000175000017500000016127600000000000030665 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock import os_resource_classes as orc from oslo_db import exception as db_exc from oslo_utils.fixture import uuidsentinel from placement.db.sqlalchemy import models from placement import exception from placement import lib as placement_lib from placement.objects import allocation as alloc_obj from placement.objects import inventory as inv_obj from placement.objects import research_context as res_ctx from placement.objects import resource_provider as rp_obj from placement.objects import trait as trait_obj from placement.objects import usage as usage_obj from placement.tests.functional.db import test_base as tb class ResourceProviderTestCase(tb.PlacementDbBaseTestCase): """Test resource-provider objects' lifecycles.""" def test_create_resource_provider_requires_uuid(self): resource_provider = rp_obj.ResourceProvider(context=self.ctx) self.assertRaises(exception.ObjectActionError, resource_provider.create) def test_create_unknown_parent_provider(self): """Test that if we provide a parent_provider_uuid value that points to a resource provider that doesn't exist, that we get an ObjectActionError. """ rp = rp_obj.ResourceProvider( context=self.ctx, name='rp1', uuid=uuidsentinel.rp1, parent_provider_uuid=uuidsentinel.noexists) exc = self.assertRaises(exception.ObjectActionError, rp.create) self.assertIn('parent provider UUID does not exist', str(exc)) def test_create_with_parent_provider_uuid_same_as_uuid_fail(self): """Setting a parent provider UUID to one's own UUID makes no sense, so check we don't support it. """ cn1 = rp_obj.ResourceProvider( context=self.ctx, uuid=uuidsentinel.cn1, name='cn1', parent_provider_uuid=uuidsentinel.cn1) exc = self.assertRaises(exception.ObjectActionError, cn1.create) self.assertIn('parent provider UUID cannot be same as UUID', str(exc)) def test_create_resource_provider(self): created_resource_provider = self._create_provider( uuidsentinel.fake_resource_name, uuid=uuidsentinel.fake_resource_provider, ) self.assertIsInstance(created_resource_provider.id, int) retrieved_resource_provider = rp_obj.ResourceProvider.get_by_uuid( self.ctx, uuidsentinel.fake_resource_provider ) self.assertEqual(retrieved_resource_provider.id, created_resource_provider.id) self.assertEqual(retrieved_resource_provider.uuid, created_resource_provider.uuid) self.assertEqual(retrieved_resource_provider.name, created_resource_provider.name) self.assertEqual(0, created_resource_provider.generation) self.assertEqual(0, retrieved_resource_provider.generation) self.assertIsNone(retrieved_resource_provider.parent_provider_uuid) def test_create_with_parent_provider_uuid(self): self._create_provider('p1', uuid=uuidsentinel.create_p) child = self._create_provider('c1', uuid=uuidsentinel.create_c, parent=uuidsentinel.create_p) self.assertEqual(uuidsentinel.create_c, child.uuid) self.assertEqual(uuidsentinel.create_p, child.parent_provider_uuid) self.assertEqual(uuidsentinel.create_p, child.root_provider_uuid) def test_inherit_root_from_parent(self): """Tests that if we update an existing provider's parent provider UUID, that the root provider UUID of the updated provider is automatically set to the parent provider's root provider UUID. 
""" rp1 = self._create_provider('rp1') # Test the root was auto-set to the create provider's UUID self.assertEqual(uuidsentinel.rp1, rp1.root_provider_uuid) # Create a new provider that we will make the parent of rp1 parent_rp = self._create_provider('parent') self.assertEqual(uuidsentinel.parent, parent_rp.root_provider_uuid) # Now change rp1 to be a child of parent and check rp1's root is # changed to that of the parent. rp1.parent_provider_uuid = parent_rp.uuid rp1.save() self.assertEqual(uuidsentinel.parent, rp1.root_provider_uuid) def test_save_unknown_parent_provider(self): """Test that if we provide a parent_provider_uuid value that points to a resource provider that doesn't exist, that we get an ObjectActionError if we save the object. """ self.assertRaises( exception.ObjectActionError, self._create_provider, 'rp1', parent=uuidsentinel.noexists) def test_save_resource_provider(self): created_resource_provider = self._create_provider( uuidsentinel.fake_resource_name, uuid=uuidsentinel.fake_resource_provider, ) created_resource_provider.name = 'new-name' created_resource_provider.save() retrieved_resource_provider = rp_obj.ResourceProvider.get_by_uuid( self.ctx, uuidsentinel.fake_resource_provider ) self.assertEqual('new-name', retrieved_resource_provider.name) def test_get_subtree(self): root1 = self._create_provider('root1') child1 = self._create_provider('child1', parent=root1.uuid) child2 = self._create_provider('child2', parent=root1.uuid) grandchild1 = self._create_provider('grandchild1', parent=child1.uuid) grandchild2 = self._create_provider('grandchild2', parent=child1.uuid) grandchild3 = self._create_provider('grandchild3', parent=child2.uuid) grandchild4 = self._create_provider('grandchild4', parent=child2.uuid) self.assertEqual( {grandchild1.uuid}, {rp.uuid for rp in grandchild1.get_subtree(self.context)}) self.assertEqual( {child1.uuid, grandchild1.uuid, grandchild2.uuid}, {rp.uuid for rp in child1.get_subtree(self.context)}) self.assertEqual( {child2.uuid, grandchild3.uuid, grandchild4.uuid}, {rp.uuid for rp in child2.get_subtree(self.context)}) self.assertEqual( {root1.uuid, child1.uuid, child2.uuid, grandchild1.uuid, grandchild2.uuid, grandchild3.uuid, grandchild4.uuid}, {rp.uuid for rp in root1.get_subtree(self.context)}) def test_save_reparenting_not_allowed(self): """Tests that we prevent a resource provider's parent provider UUID from being changed from a non-NULL value to another non-NULL value if not explicitly requested. 
""" cn1 = self._create_provider('cn1') self._create_provider('cn2') self._create_provider('cn3') # First, make sure we can set the parent for a provider that does not # have a parent currently cn1.parent_provider_uuid = uuidsentinel.cn2 cn1.save() # Now make sure we can't change the parent provider cn1.parent_provider_uuid = uuidsentinel.cn3 exc = self.assertRaises(exception.ObjectActionError, cn1.save) self.assertIn('re-parenting a provider is not currently', str(exc)) # Also ensure that we can't "un-parent" a provider cn1.parent_provider_uuid = None exc = self.assertRaises(exception.ObjectActionError, cn1.save) self.assertIn('un-parenting a provider is not currently', str(exc)) def test_save_reparent_same_tree(self): root1 = self._create_provider('root1') child1 = self._create_provider('child1', parent=root1.uuid) child2 = self._create_provider('child2', parent=root1.uuid) self._create_provider('grandchild1', parent=child1.uuid) self._create_provider('grandchild2', parent=child1.uuid) self._create_provider('grandchild3', parent=child2.uuid) self._create_provider('grandchild4', parent=child2.uuid) test_rp = self._create_provider('test_rp', parent=child1.uuid) test_rp_child = self._create_provider( 'test_rp_child', parent=test_rp.uuid) # move test_rp RP upwards test_rp.parent_provider_uuid = root1.uuid test_rp.save(allow_reparenting=True) # to make sure that this re-parenting does not effect the child test RP # in the db we need to reload it before we assert any change test_rp_child = rp_obj.ResourceProvider.get_by_uuid( self.ctx, test_rp_child.uuid) self.assertEqual(root1.uuid, test_rp.parent_provider_uuid) self.assertEqual(root1.uuid, test_rp.root_provider_uuid) self.assertEqual(test_rp.uuid, test_rp_child.parent_provider_uuid) self.assertEqual(root1.uuid, test_rp_child.root_provider_uuid) # move downwards test_rp.parent_provider_uuid = child1.uuid test_rp.save(allow_reparenting=True) # to make sure that this re-parenting does not effect the child test RP # in the db we need to reload it before we assert any change test_rp_child = rp_obj.ResourceProvider.get_by_uuid( self.ctx, test_rp_child.uuid) self.assertEqual(child1.uuid, test_rp.parent_provider_uuid) self.assertEqual(root1.uuid, test_rp.root_provider_uuid) self.assertEqual(test_rp.uuid, test_rp_child.parent_provider_uuid) self.assertEqual(root1.uuid, test_rp_child.root_provider_uuid) # move sideways test_rp.parent_provider_uuid = child2.uuid test_rp.save(allow_reparenting=True) # to make sure that this re-parenting does not effect the child test RP # in the db we need to reload it before we assert any change test_rp_child = rp_obj.ResourceProvider.get_by_uuid( self.ctx, test_rp_child.uuid) self.assertEqual(child2.uuid, test_rp.parent_provider_uuid) self.assertEqual(root1.uuid, test_rp.root_provider_uuid) self.assertEqual(test_rp.uuid, test_rp_child.parent_provider_uuid) self.assertEqual(root1.uuid, test_rp_child.root_provider_uuid) def test_save_reparent_another_tree(self): root1 = self._create_provider('root1') child1 = self._create_provider('child1', parent=root1.uuid) self._create_provider('child2', parent=root1.uuid) root2 = self._create_provider('root2') self._create_provider('child3', parent=root2.uuid) child4 = self._create_provider('child4', parent=root2.uuid) test_rp = self._create_provider('test_rp', parent=child1.uuid) test_rp_child = self._create_provider( 'test_rp_child', parent=test_rp.uuid) test_rp.parent_provider_uuid = child4.uuid test_rp.save(allow_reparenting=True) # the re-parenting affected the the child test RP 
in the db so we # have to reload it and assert the change test_rp_child = rp_obj.ResourceProvider.get_by_uuid( self.ctx, test_rp_child.uuid) self.assertEqual(child4.uuid, test_rp.parent_provider_uuid) self.assertEqual(root2.uuid, test_rp.root_provider_uuid) self.assertEqual(test_rp.uuid, test_rp_child.parent_provider_uuid) self.assertEqual(root2.uuid, test_rp_child.root_provider_uuid) def test_save_reparent_to_new_root(self): root1 = self._create_provider('root1') child1 = self._create_provider('child1', parent=root1.uuid) test_rp = self._create_provider('test_rp', parent=child1.uuid) test_rp_child = self._create_provider( 'test_rp_child', parent=test_rp.uuid) # we are creating a new root from a subtree, a.k.a un-parenting test_rp.parent_provider_uuid = None test_rp.save(allow_reparenting=True) # the un-parenting affected the the child test RP in the db so we # have to reload it and assert the change test_rp_child = rp_obj.ResourceProvider.get_by_uuid( self.ctx, test_rp_child.uuid) self.assertIsNone(test_rp.parent_provider_uuid) self.assertEqual(test_rp.uuid, test_rp.root_provider_uuid) self.assertEqual(test_rp.uuid, test_rp_child.parent_provider_uuid) self.assertEqual(test_rp.uuid, test_rp_child.root_provider_uuid) def test_save_reparent_the_root(self): root1 = self._create_provider('root1') child1 = self._create_provider('child1', parent=root1.uuid) # now the test_rp is also a root RP test_rp = self._create_provider('test_rp') test_rp_child = self._create_provider( 'test_rp_child', parent=test_rp.uuid) test_rp.parent_provider_uuid = child1.uuid test_rp.save(allow_reparenting=True) # the re-parenting affected the the child test RP in the db so we # have to reload it and assert the change test_rp_child = rp_obj.ResourceProvider.get_by_uuid( self.ctx, test_rp_child.uuid) self.assertEqual(child1.uuid, test_rp.parent_provider_uuid) self.assertEqual(root1.uuid, test_rp.root_provider_uuid) self.assertEqual(test_rp.uuid, test_rp_child.parent_provider_uuid) self.assertEqual(root1.uuid, test_rp_child.root_provider_uuid) def test_save_reparent_loop_fail(self): root1 = self._create_provider('root1') test_rp = self._create_provider('test_rp', parent=root1.uuid) test_rp_child = self._create_provider( 'test_rp_child', parent=test_rp.uuid) test_rp_grandchild = self._create_provider( 'test_rp_grandchild', parent=test_rp_child.uuid) # self loop, i.e. we are our parents test_rp.parent_provider_uuid = test_rp.uuid exc = self.assertRaises( exception.ObjectActionError, test_rp.save, allow_reparenting=True) self.assertIn( 'creating loop in the provider tree is not allowed.', str(exc)) # direct loop, i.e. our child is our parent test_rp.parent_provider_uuid = test_rp_child.uuid exc = self.assertRaises( exception.ObjectActionError, test_rp.save, allow_reparenting=True) self.assertIn( 'creating loop in the provider tree is not allowed.', str(exc)) # indirect loop, i.e. our grandchild is our parent test_rp.parent_provider_uuid = test_rp_grandchild.uuid exc = self.assertRaises( exception.ObjectActionError, test_rp.save, allow_reparenting=True) self.assertIn( 'creating loop in the provider tree is not allowed.', str(exc)) def test_nested_providers(self): """Create a hierarchy of resource providers and run through a series of tests that ensure one cannot delete a resource provider that has no direct allocations but its child providers do have allocations. 
""" root_rp = self._create_provider('root_rp') child_rp = self._create_provider('child_rp', parent=uuidsentinel.root_rp) grandchild_rp = self._create_provider('grandchild_rp', parent=uuidsentinel.child_rp) # Verify that the root_provider_uuid of both the child and the # grandchild is the UUID of the grandparent self.assertEqual(root_rp.uuid, child_rp.root_provider_uuid) self.assertEqual(root_rp.uuid, grandchild_rp.root_provider_uuid) # Create some inventory in the grandchild, allocate some consumers to # the grandchild and then attempt to delete the root provider and child # provider, both of which should fail. tb.add_inventory(grandchild_rp, orc.VCPU, 1) # Check all providers returned when getting by root UUID rps = rp_obj.get_all_by_filters( self.ctx, filters={ 'in_tree': uuidsentinel.root_rp, } ) self.assertEqual(3, len(rps)) # Check all providers returned when getting by child UUID rps = rp_obj.get_all_by_filters( self.ctx, filters={ 'in_tree': uuidsentinel.child_rp, } ) self.assertEqual(3, len(rps)) # Check all providers returned when getting by grandchild UUID rps = rp_obj.get_all_by_filters( self.ctx, filters={ 'in_tree': uuidsentinel.grandchild_rp, } ) self.assertEqual(3, len(rps)) # Make sure that the member_of and uuid filters work with the in_tree # filter # No aggregate associations yet, so expect no records when adding a # member_of filter rps = rp_obj.get_all_by_filters( self.ctx, filters={ 'member_of': [[uuidsentinel.agg]], 'in_tree': uuidsentinel.grandchild_rp, } ) self.assertEqual(0, len(rps)) # OK, associate the grandchild with an aggregate and verify that ONLY # the grandchild is returned when asking for the grandchild's tree # along with the aggregate as member_of grandchild_rp.set_aggregates([uuidsentinel.agg]) rps = rp_obj.get_all_by_filters( self.ctx, filters={ 'member_of': [[uuidsentinel.agg]], 'in_tree': uuidsentinel.grandchild_rp, } ) self.assertEqual(1, len(rps)) self.assertEqual(uuidsentinel.grandchild_rp, rps[0].uuid) # Try filtering on an unknown UUID and verify no results rps = rp_obj.get_all_by_filters( self.ctx, filters={ 'uuid': uuidsentinel.unknown_rp, 'in_tree': uuidsentinel.grandchild_rp, } ) self.assertEqual(0, len(rps)) # And now check that filtering for just the child's UUID along with the # tree produces just a single provider (the child) rps = rp_obj.get_all_by_filters( self.ctx, filters={ 'uuid': uuidsentinel.child_rp, 'in_tree': uuidsentinel.grandchild_rp, } ) self.assertEqual(1, len(rps)) self.assertEqual(uuidsentinel.child_rp, rps[0].uuid) # Ensure that the resources filter also continues to work properly with # the in_tree filter. 
Request resources that none of the providers # currently have and ensure no providers are returned rps = rp_obj.get_all_by_filters( self.ctx, filters={ 'in_tree': uuidsentinel.grandchild_rp, 'resources': { 'VCPU': 200, } } ) self.assertEqual(0, len(rps)) # And now ask for one VCPU, which should only return us the grandchild rps = rp_obj.get_all_by_filters( self.ctx, filters={ 'in_tree': uuidsentinel.grandchild_rp, 'resources': { 'VCPU': 1, } } ) self.assertEqual(1, len(rps)) self.assertEqual(uuidsentinel.grandchild_rp, rps[0].uuid) # Finally, verify we still get the grandchild if filtering on the # parent's UUID as in_tree rps = rp_obj.get_all_by_filters( self.ctx, filters={ 'in_tree': uuidsentinel.child_rp, 'resources': { 'VCPU': 1, } } ) self.assertEqual(1, len(rps)) self.assertEqual(uuidsentinel.grandchild_rp, rps[0].uuid) alloc_list = self.allocate_from_provider( grandchild_rp, orc.VCPU, 1) self.assertRaises(exception.CannotDeleteParentResourceProvider, root_rp.destroy) self.assertRaises(exception.CannotDeleteParentResourceProvider, child_rp.destroy) # Cannot delete provider if it has allocations self.assertRaises(exception.ResourceProviderInUse, grandchild_rp.destroy) # Now remove the allocations against the child and check that we can # now delete the child provider alloc_obj.delete_all(self.ctx, alloc_list) grandchild_rp.destroy() child_rp.destroy() root_rp.destroy() def test_has_provider_trees(self): """The _has_provider_trees() helper method should return False unless there is a resource provider that is a parent. """ self.assertFalse(res_ctx._has_provider_trees(self.ctx)) self._create_provider('cn') # No parents yet. Should still be False. self.assertFalse(res_ctx._has_provider_trees(self.ctx)) self._create_provider('numa0', parent=uuidsentinel.cn) # OK, now we've got a parent, so should be True self.assertTrue(res_ctx._has_provider_trees(self.ctx)) def test_destroy_resource_provider(self): created_resource_provider = self._create_provider( uuidsentinel.fake_resource_name, uuid=uuidsentinel.fake_resource_provider, ) created_resource_provider.destroy() self.assertRaises(exception.NotFound, rp_obj.ResourceProvider.get_by_uuid, self.ctx, uuidsentinel.fake_resource_provider) self.assertRaises(exception.NotFound, created_resource_provider.destroy) def test_destroy_foreign_key(self): """This tests bug #1739571.""" def emulate_rp_mysql_delete(func): def wrapped(context, _id): query = context.session.query(models.ResourceProvider) query = query.filter(models.ResourceProvider.id == _id) rp = query.first() self.assertIsNone(rp.root_provider_id) return func(context, _id) return wrapped emulated = emulate_rp_mysql_delete(rp_obj._delete_rp_record) rp = self._create_provider(uuidsentinel.fk) with mock.patch.object(rp_obj, '_delete_rp_record', emulated): rp.destroy() def test_destroy_allocated_resource_provider_fails(self): rp, allocation = self._make_allocation(tb.DISK_INVENTORY, tb.DISK_ALLOCATION) self.assertRaises(exception.ResourceProviderInUse, rp.destroy) def test_destroy_resource_provider_destroy_inventory(self): resource_provider = self._create_provider( uuidsentinel.fake_resource_name, uuid=uuidsentinel.fake_resource_provider, ) tb.add_inventory(resource_provider, tb.DISK_INVENTORY['resource_class'], tb.DISK_INVENTORY['total']) inventories = inv_obj.get_all_by_resource_provider( self.ctx, resource_provider) self.assertEqual(1, len(inventories)) resource_provider.destroy() inventories = inv_obj.get_all_by_resource_provider( self.ctx, resource_provider) self.assertEqual(0, 
len(inventories)) def test_destroy_with_traits(self): """Test deleting a resource provider that has a trait successfully. """ rp = self._create_provider('fake_rp1', uuid=uuidsentinel.fake_rp1) custom_trait = 'CUSTOM_TRAIT_1' tb.set_traits(rp, custom_trait) trl = trait_obj.get_all_by_resource_provider(self.ctx, rp) self.assertEqual(1, len(trl)) # Delete a resource provider that has a trait association. rp.destroy() # Assert the record has been deleted # in 'resource_provider_traits' table # after Resource Provider object has been destroyed. trl = trait_obj.get_all_by_resource_provider(self.ctx, rp) self.assertEqual(0, len(trl)) # Assert that NotFound exception is raised. self.assertRaises(exception.NotFound, rp_obj.ResourceProvider.get_by_uuid, self.ctx, uuidsentinel.fake_rp1) def test_set_traits_for_resource_provider(self): rp = self._create_provider('fake_resource_provider') generation = rp.generation self.assertIsInstance(rp.id, int) trait_names = ['CUSTOM_TRAIT_A', 'CUSTOM_TRAIT_B', 'CUSTOM_TRAIT_C'] tb.set_traits(rp, *trait_names) rp_traits = trait_obj.get_all_by_resource_provider(self.ctx, rp) self._assert_traits(trait_names, rp_traits) self.assertEqual(rp.generation, generation + 1) generation = rp.generation trait_names.remove('CUSTOM_TRAIT_A') updated_traits = trait_obj.get_all( self.ctx, filters={'name_in': trait_names}) self._assert_traits(trait_names, updated_traits) tb.set_traits(rp, *trait_names) rp_traits = trait_obj.get_all_by_resource_provider(self.ctx, rp) self._assert_traits(trait_names, rp_traits) self.assertEqual(rp.generation, generation + 1) def test_set_traits_for_correct_resource_provider(self): """This test creates two ResourceProviders, and attaches same trait to both of them. Then detaching the trait from one of them, and ensure the trait still associated with another one. """ # Create two ResourceProviders rp1 = self._create_provider('fake_resource_provider1') rp2 = self._create_provider('fake_resource_provider2') tname = 'CUSTOM_TRAIT_A' # Associate the trait with two ResourceProviders tb.set_traits(rp1, tname) tb.set_traits(rp2, tname) # Ensure the association rp1_traits = trait_obj.get_all_by_resource_provider(self.ctx, rp1) rp2_traits = trait_obj.get_all_by_resource_provider(self.ctx, rp2) self._assert_traits([tname], rp1_traits) self._assert_traits([tname], rp2_traits) # Detach the trait from one of ResourceProvider, and ensure the # trait association with another ResourceProvider still exists. tb.set_traits(rp1) rp1_traits = trait_obj.get_all_by_resource_provider(self.ctx, rp1) rp2_traits = trait_obj.get_all_by_resource_provider(self.ctx, rp2) self._assert_traits([], rp1_traits) self._assert_traits([tname], rp2_traits) def test_set_inventory_unknown_resource_class(self): """Test attempting to set inventory to an unknown resource class raises an exception. """ rp = self._create_provider('compute-host') inv = inv_obj.Inventory( rp._context, resource_provider=rp, resource_class='UNKNOWN', total=1024, reserved=15, min_unit=10, max_unit=100, step_size=10, allocation_ratio=1.0) self.assertRaises( exception.ResourceClassNotFound, rp.add_inventory, inv) def test_set_inventory_fail_in_use(self): """Test attempting to set inventory which would result in removing an inventory record for a resource class that still has allocations against it. 
""" rp = self._create_provider('compute-host') tb.add_inventory(rp, 'VCPU', 12) self.allocate_from_provider(rp, 'VCPU', 1) inv = inv_obj.Inventory( resource_provider=rp, resource_class='MEMORY_MB', total=1024, reserved=0, min_unit=256, max_unit=1024, step_size=256, allocation_ratio=1.0, ) self.assertRaises(exception.InventoryInUse, rp.set_inventory, [inv]) @mock.patch('placement.objects.resource_provider.LOG') def test_set_inventory_over_capacity(self, mock_log): rp = self._create_provider(uuidsentinel.rp_name) disk_inv = tb.add_inventory(rp, orc.DISK_GB, 2048, reserved=15, min_unit=10, max_unit=600, step_size=10) vcpu_inv = tb.add_inventory(rp, orc.VCPU, 12, allocation_ratio=16.0) self.assertFalse(mock_log.warning.called) # Allocate something reasonable for the above inventory self.allocate_from_provider(rp, 'DISK_GB', 500) # Update our inventory to over-subscribe us after the above allocation disk_inv.total = 400 rp.set_inventory([disk_inv, vcpu_inv]) # We should succeed, but have logged a warning for going over on disk mock_log.warning.assert_called_once_with( mock.ANY, {'uuid': rp.uuid, 'resource': 'DISK_GB'}) def test_provider_modify_inventory(self): rp = self._create_provider(uuidsentinel.rp_name) saved_generation = rp.generation disk_inv = tb.add_inventory(rp, orc.DISK_GB, 1024, reserved=15, min_unit=10, max_unit=100, step_size=10) vcpu_inv = tb.add_inventory(rp, orc.VCPU, 12, allocation_ratio=16.0) # generation has bumped once for each add self.assertEqual(saved_generation + 2, rp.generation) saved_generation = rp.generation new_inv_list = inv_obj.get_all_by_resource_provider(self.ctx, rp) self.assertEqual(2, len(new_inv_list)) resource_classes = [inv.resource_class for inv in new_inv_list] self.assertIn(orc.VCPU, resource_classes) self.assertIn(orc.DISK_GB, resource_classes) # reset inventory to just disk_inv rp.set_inventory([disk_inv]) # generation has bumped self.assertEqual(saved_generation + 1, rp.generation) saved_generation = rp.generation new_inv_list = inv_obj.get_all_by_resource_provider(self.ctx, rp) self.assertEqual(1, len(new_inv_list)) resource_classes = [inv.resource_class for inv in new_inv_list] self.assertNotIn(orc.VCPU, resource_classes) self.assertIn(orc.DISK_GB, resource_classes) self.assertEqual(1024, new_inv_list[0].total) # update existing disk inv to new settings disk_inv = inv_obj.Inventory( resource_provider=rp, resource_class=orc.DISK_GB, total=2048, reserved=15, min_unit=10, max_unit=100, step_size=10, allocation_ratio=1.0) rp.update_inventory(disk_inv) # generation has bumped self.assertEqual(saved_generation + 1, rp.generation) saved_generation = rp.generation new_inv_list = inv_obj.get_all_by_resource_provider(self.ctx, rp) self.assertEqual(1, len(new_inv_list)) self.assertEqual(2048, new_inv_list[0].total) # delete inventory rp.delete_inventory(orc.DISK_GB) # generation has bumped self.assertEqual(saved_generation + 1, rp.generation) saved_generation = rp.generation new_inv_list = inv_obj.get_all_by_resource_provider(self.ctx, rp) result = inv_obj.find(new_inv_list, orc.DISK_GB) self.assertIsNone(result) self.assertRaises(exception.NotFound, rp.delete_inventory, orc.DISK_GB) # check inventory list is empty inv_list = inv_obj.get_all_by_resource_provider(self.ctx, rp) self.assertEqual(0, len(inv_list)) # add some inventory rp.add_inventory(vcpu_inv) inv_list = inv_obj.get_all_by_resource_provider(self.ctx, rp) self.assertEqual(1, len(inv_list)) # generation has bumped self.assertEqual(saved_generation + 1, rp.generation) saved_generation = 
rp.generation # add same inventory again self.assertRaises(db_exc.DBDuplicateEntry, rp.add_inventory, vcpu_inv) # generation has not bumped self.assertEqual(saved_generation, rp.generation) # fail when generation wrong rp.generation = rp.generation - 1 self.assertRaises(exception.ConcurrentUpdateDetected, rp.set_inventory, inv_list) def test_delete_inventory_not_found(self): rp = self._create_provider(uuidsentinel.rp_name) error = self.assertRaises(exception.NotFound, rp.delete_inventory, 'DISK_GB') self.assertIn('No inventory of class DISK_GB found for delete', str(error)) def test_delete_inventory_with_allocation(self): rp, allocation = self._make_allocation(tb.DISK_INVENTORY, tb.DISK_ALLOCATION) error = self.assertRaises(exception.InventoryInUse, rp.delete_inventory, 'DISK_GB') self.assertIn( "Inventory for 'DISK_GB' on resource provider '%s' in use" % rp.uuid, str(error)) def test_update_inventory_not_found(self): rp = self._create_provider(uuidsentinel.rp_name) disk_inv = inv_obj.Inventory(resource_provider=rp, resource_class='DISK_GB', total=2048) error = self.assertRaises(exception.NotFound, rp.update_inventory, disk_inv) self.assertIn('No inventory of class DISK_GB found', str(error)) @mock.patch('placement.objects.resource_provider.LOG') def test_update_inventory_violates_allocation(self, mock_log): # Compute nodes that are reconfigured have to be able to set # their inventory to something that violates allocations so # we need to make that possible. rp, allocation = self._make_allocation(tb.DISK_INVENTORY, tb.DISK_ALLOCATION) # attempt to set inventory to less than currently allocated # amounts new_total = 1 disk_inv = inv_obj.Inventory( resource_provider=rp, resource_class=orc.DISK_GB, total=new_total) rp.update_inventory(disk_inv) usages = usage_obj.get_all_by_resource_provider_uuid( self.ctx, rp.uuid) self.assertEqual(allocation.used, usages[0].usage) inv_list = inv_obj.get_all_by_resource_provider(self.ctx, rp) self.assertEqual(new_total, inv_list[0].total) mock_log.warning.assert_called_once_with( mock.ANY, {'uuid': rp.uuid, 'resource': 'DISK_GB'}) def test_add_allocation_increments_generation(self): rp = self._create_provider(name='foo') tb.add_inventory(rp, tb.DISK_INVENTORY['resource_class'], tb.DISK_INVENTORY['total']) expected_gen = rp.generation + 1 self.allocate_from_provider(rp, tb.DISK_ALLOCATION['resource_class'], tb.DISK_ALLOCATION['used']) self.assertEqual(expected_gen, rp.generation) def test_get_all_by_resource_provider_multiple_providers(self): rp1 = self._create_provider('cn1') rp2 = self._create_provider(name='cn2') for rp in (rp1, rp2): tb.add_inventory(rp, tb.DISK_INVENTORY['resource_class'], tb.DISK_INVENTORY['total']) tb.add_inventory(rp, orc.IPV4_ADDRESS, 10, max_unit=2) # Get inventories for the first resource provider and validate # the inventory records have a matching resource provider got_inv = inv_obj.get_all_by_resource_provider(self.ctx, rp1) for inv in got_inv: self.assertEqual(rp1.id, inv.resource_provider.id) class ResourceProviderListTestCase(tb.PlacementDbBaseTestCase): def _run_get_all_by_filters(self, expected_rps, filters=None): '''Helper function to validate get_all_by_filters()''' resource_providers = rp_obj.get_all_by_filters(self.ctx, filters=filters) self.assertEqual(len(expected_rps), len(resource_providers)) rp_names = set([rp.name for rp in resource_providers]) self.assertEqual(set(expected_rps), rp_names) return resource_providers def test_get_all_by_filters(self): for rp_i in ['1', '2']: self._create_provider('rp_' + rp_i) 
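        # With no filters at all, every provider should be returned.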
expected_rps = ['rp_1', 'rp_2'] self._run_get_all_by_filters(expected_rps) filters = {'name': 'rp_1'} expected_rps = ['rp_1'] self._run_get_all_by_filters(expected_rps, filters=filters) filters = {'uuid': uuidsentinel.rp_2} expected_rps = ['rp_2'] self._run_get_all_by_filters(expected_rps, filters=filters) def test_get_all_by_filters_with_resources(self): for rp_i in ['1', '2']: rp = self._create_provider('rp_' + rp_i) tb.add_inventory(rp, orc.VCPU, 2) tb.add_inventory(rp, orc.DISK_GB, 1024, reserved=2) # Write a specific inventory for testing min/max units and steps tb.add_inventory(rp, orc.MEMORY_MB, 1024, reserved=2, min_unit=2, max_unit=4, step_size=2) # Create the VCPU allocation only for the first RP if rp_i != '1': continue self.allocate_from_provider(rp, orc.VCPU, used=1) # Both RPs should accept that request given the only current allocation # for the first RP is leaving one VCPU filters = {'resources': {orc.VCPU: 1}} expected_rps = ['rp_1', 'rp_2'] self._run_get_all_by_filters(expected_rps, filters=filters) # Now, when asking for 2 VCPUs, only the second RP should accept that # given the current allocation for the first RP filters = {'resources': {orc.VCPU: 2}} expected_rps = ['rp_2'] self._run_get_all_by_filters(expected_rps, filters=filters) # Adding a second resource request should be okay for the 2nd RP # given it has enough disk but we also need to make sure that the # first RP is not acceptable because of the VCPU request filters = {'resources': {orc.VCPU: 2, orc.DISK_GB: 1022}} expected_rps = ['rp_2'] self._run_get_all_by_filters(expected_rps, filters=filters) # Now, we are asking for both disk and VCPU resources that all the RPs # can't accept (as the 2nd RP is having a reserved size) filters = {'resources': {orc.VCPU: 2, orc.DISK_GB: 1024}} expected_rps = [] self._run_get_all_by_filters(expected_rps, filters=filters) # We also want to verify that asking for a specific RP can also be # checking the resource usage. 
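        # rp_1 has 2 VCPU with 1 already allocated above, so a request for a
        # single VCPU scoped to rp_1 by name should still match it.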
filters = {'name': 'rp_1', 'resources': {orc.VCPU: 1}} expected_rps = ['rp_1'] self._run_get_all_by_filters(expected_rps, filters=filters) # Let's verify that the min and max units are checked too # Case 1: amount is in between min and max and modulo step_size filters = {'resources': {orc.MEMORY_MB: 2}} expected_rps = ['rp_1', 'rp_2'] self._run_get_all_by_filters(expected_rps, filters=filters) # Case 2: amount is less than min_unit filters = {'resources': {orc.MEMORY_MB: 1}} expected_rps = [] self._run_get_all_by_filters(expected_rps, filters=filters) # Case 3: amount is more than min_unit filters = {'resources': {orc.MEMORY_MB: 5}} expected_rps = [] self._run_get_all_by_filters(expected_rps, filters=filters) # Case 4: amount is not modulo step_size filters = {'resources': {orc.MEMORY_MB: 3}} expected_rps = [] self._run_get_all_by_filters(expected_rps, filters=filters) def test_get_all_by_filters_with_resources_not_existing(self): self.assertRaises( exception.ResourceClassNotFound, rp_obj.get_all_by_filters, self.ctx, {'resources': {'FOOBAR': 3}}) def test_get_all_by_filters_aggregate(self): for rp_i in [1, 2, 3, 4]: aggs = [uuidsentinel.agg_a, uuidsentinel.agg_b] if rp_i % 2 else [] self._create_provider('rp_' + str(rp_i), *aggs) for rp_i in [5, 6]: aggs = [uuidsentinel.agg_b, uuidsentinel.agg_c] self._create_provider('rp_' + str(rp_i), *aggs) # Get rps in "agg_a" filters = {'member_of': [[uuidsentinel.agg_a]]} expected_rps = ['rp_1', 'rp_3'] self._run_get_all_by_filters(expected_rps, filters=filters) # Validate rps in "agg_a" or "agg_b" filters = {'member_of': [[uuidsentinel.agg_a, uuidsentinel.agg_b]]} expected_rps = ['rp_1', 'rp_3', 'rp_5', 'rp_6'] self._run_get_all_by_filters(expected_rps, filters=filters) # Validate rps in "agg_a" or "agg_b" and named "rp_1" filters = {'member_of': [[uuidsentinel.agg_a, uuidsentinel.agg_b]], 'name': 'rp_1'} expected_rps = ['rp_1'] self._run_get_all_by_filters(expected_rps, filters=filters) # Validate rps in "agg_a" or "agg_b" and named "barnabas" filters = {'member_of': [[uuidsentinel.agg_a, uuidsentinel.agg_b]], 'name': 'barnabas'} expected_rps = [] self._run_get_all_by_filters(expected_rps, filters=filters) # Validate rps in "agg_1" or "agg_2" filters = {'member_of': [[uuidsentinel.agg_1, uuidsentinel.agg_2]]} expected_rps = [] self._run_get_all_by_filters(expected_rps, filters=filters) # Validate rps NOT in "agg_a" filters = {'forbidden_aggs': [uuidsentinel.agg_a]} expected_rps = ['rp_2', 'rp_4', 'rp_5', 'rp_6'] self._run_get_all_by_filters(expected_rps, filters=filters) # Validate rps NOT in "agg_1" filters = {'forbidden_aggs': [uuidsentinel.agg_1]} expected_rps = ['rp_1', 'rp_2', 'rp_3', 'rp_4', 'rp_5', 'rp_6'] self._run_get_all_by_filters(expected_rps, filters=filters) # Validate rps in "agg_a" or "agg_b" that are not in "agg_1" filters = {'member_of': [[uuidsentinel.agg_a, uuidsentinel.agg_b]], 'forbidden_aggs': [uuidsentinel.agg_1]} expected_rps = ['rp_1', 'rp_3', 'rp_5', 'rp_6'] self._run_get_all_by_filters(expected_rps, filters=filters) # Validate rps in "agg_a" or "agg_b" that are not in "agg_a" # ...which means rps in "agg_b" filters = {'member_of': [[uuidsentinel.agg_a, uuidsentinel.agg_b]], 'forbidden_aggs': [uuidsentinel.agg_a]} expected_rps = ['rp_5', 'rp_6'] self._run_get_all_by_filters(expected_rps, filters=filters) # Validate rps in both "agg_a" and "agg_b" that are not in "agg_a" # ...which means no rp filters = {'member_of': [[uuidsentinel.agg_a], [uuidsentinel.agg_b]], 'forbidden_aggs': [uuidsentinel.agg_a]} expected_rps = [] 
self._run_get_all_by_filters(expected_rps, filters=filters) def test_get_all_by_required(self): # Create some resource providers and give them each 0 or more traits. # rp_name_0: no traits # rp_name_1: CUSTOM_TRAIT_A # rp_name_2: CUSTOM_TRAIT_A, CUSTOM_TRAIT_B # rp_name_3: CUSTOM_TRAIT_A, CUSTOM_TRAIT_B, CUSTOM_TRAIT_C trait_names = ['CUSTOM_TRAIT_A', 'CUSTOM_TRAIT_B', 'CUSTOM_TRAIT_C'] for rp_i in [0, 1, 2, 3]: rp = self._create_provider('rp_' + str(rp_i)) if rp_i: traits = trait_names[0:rp_i] tb.set_traits(rp, *traits) # Three rps (1, 2, 3) should have CUSTOM_TRAIT_A filters = {'required_traits': [{'CUSTOM_TRAIT_A'}]} expected_rps = ['rp_1', 'rp_2', 'rp_3'] self._run_get_all_by_filters(expected_rps, filters=filters) # One rp (rp 1) if we forbid CUSTOM_TRAIT_B, with a single trait of # CUSTOM_TRAIT_A filters = { 'required_traits': [{'CUSTOM_TRAIT_A'}], 'forbidden_traits': {'CUSTOM_TRAIT_B'}, } expected_rps = ['rp_1'] custom_a_rps = self._run_get_all_by_filters(expected_rps, filters=filters) self.assertEqual(uuidsentinel.rp_1, custom_a_rps[0].uuid) traits = trait_obj.get_all_by_resource_provider( self.ctx, custom_a_rps[0]) self.assertEqual(1, len(traits)) self.assertEqual('CUSTOM_TRAIT_A', traits[0].name) # (A or B) and not C filters = { 'required_traits': [{'CUSTOM_TRAIT_A', 'CUSTOM_TRAIT_B'}], 'forbidden_traits': {'CUSTOM_TRAIT_C'}, } expected_rps = ['rp_1', 'rp_2'] self._run_get_all_by_filters(expected_rps, filters=filters) # A and (B or C) filters = { 'required_traits': [ {'CUSTOM_TRAIT_A'}, {'CUSTOM_TRAIT_B', 'CUSTOM_TRAIT_C'}], } expected_rps = ['rp_2', 'rp_3'] self._run_get_all_by_filters(expected_rps, filters=filters) class TestResourceProviderAggregates(tb.PlacementDbBaseTestCase): def test_set_and_get_new_aggregates(self): aggregate_uuids = [uuidsentinel.agg_a, uuidsentinel.agg_b] rp = self._create_provider( uuidsentinel.rp_name, *aggregate_uuids, uuid=uuidsentinel.rp_uuid ) read_aggregate_uuids = rp.get_aggregates() self.assertCountEqual(aggregate_uuids, read_aggregate_uuids) # Since get_aggregates always does a new query this is # mostly nonsense but is here for completeness. read_rp = rp_obj.ResourceProvider.get_by_uuid( self.ctx, uuidsentinel.rp_uuid) re_read_aggregate_uuids = read_rp.get_aggregates() self.assertCountEqual(aggregate_uuids, re_read_aggregate_uuids) def test_set_aggregates_is_replace(self): start_aggregate_uuids = [uuidsentinel.agg_a, uuidsentinel.agg_b] rp = self._create_provider( uuidsentinel.rp_name, *start_aggregate_uuids, uuid=uuidsentinel.rp_uuid ) read_aggregate_uuids = rp.get_aggregates() self.assertCountEqual(start_aggregate_uuids, read_aggregate_uuids) rp.set_aggregates([uuidsentinel.agg_a]) read_aggregate_uuids = rp.get_aggregates() self.assertNotIn(uuidsentinel.agg_b, read_aggregate_uuids) self.assertIn(uuidsentinel.agg_a, read_aggregate_uuids) # Empty list means delete. rp.set_aggregates([]) read_aggregate_uuids = rp.get_aggregates() self.assertEqual([], read_aggregate_uuids) def test_delete_rp_clears_aggs(self): start_aggregate_uuids = [uuidsentinel.agg_a, uuidsentinel.agg_b] rp = self._create_provider( uuidsentinel.rp_name, *start_aggregate_uuids, uuid=uuidsentinel.rp_uuid ) aggs = rp.get_aggregates() self.assertEqual(2, len(aggs)) rp.destroy() aggs = rp.get_aggregates() self.assertEqual(0, len(aggs)) def test_anchors_for_sharing_providers(self): """Test anchors_for_sharing_providers with the following setup. .............agg2..... : : : +====+ : +====+ ..agg5.. 
: | r1 | .| r2 | : +----+ : : +=+==+ +=+==+ +----+ : | s3 | : : | | | s2 | : +----+ : : +=+==+ agg1 +=+==+ +----+ ........ : | c1 |..... | c2 | : : +====+ : : +====+ agg4 +----+ : : : : : | s4 | : +----+ +----+ : +====+ +----+ :....| s5 | | s1 |.......agg3......| r3 | : +----+ +----+ +====+ :.........agg2...: """ agg1 = uuidsentinel.agg1 agg2 = uuidsentinel.agg2 agg3 = uuidsentinel.agg3 agg4 = uuidsentinel.agg4 agg5 = uuidsentinel.agg5 shr_trait = trait_obj.Trait.get_by_name( self.ctx, "MISC_SHARES_VIA_AGGREGATE") def mkrp(name, sharing, aggs, **kwargs): rp = self._create_provider(name, *aggs, **kwargs) if sharing: rp.set_traits([shr_trait]) rp.set_aggregates(aggs) return rp def _anchor(shr, anc): return res_ctx.AnchorIds( rp_id=shr.id, rp_uuid=shr.uuid, anchor_id=anc.id, anchor_uuid=anc.uuid) # r1 and c1 constitute a tree. The child is in agg1. We use this to # show that, when we ask for anchors for s1 (a member of agg1), we get # the *root* of the tree, not the aggregate member itself (c1). r1 = mkrp('r1', False, []) mkrp('c1', False, [agg1], parent=r1.uuid) # r2 and c2 constitute a tree. The root is in agg2; the child is in # agg3. We use this to show that, when we ask for anchors for a # provider that's in both of those aggregates (s1), we only get r2 once r2 = mkrp('r2', False, [agg2]) mkrp('c2', False, [agg3], parent=r2.uuid) # r3 stands alone, but is a member of two aggregates. We use this to # show that we don't "jump aggregates" - when we ask for anchors for s2 # we only get r3 (and s2 itself). r3 = mkrp('r3', False, [agg3, agg4]) # s* are sharing providers s1 = mkrp('s1', True, [agg1, agg2, agg3]) s2 = mkrp('s2', True, [agg4]) # s3 is the only member of agg5. We use this to show that the provider # is still considered its own root, even if the aggregate is only # associated with itself. s3 = mkrp('s3', True, [agg5]) # s4 is a broken semi-sharing provider - has MISC_SHARES_VIA_AGGREGATE, # but is not a member of an aggregate. It has no "anchor". s4 = mkrp('s4', True, []) # s5 is a sharing provider whose aggregates overlap with those of s1. # s5 and s1 will show up as "anchors" for each other. 
s5 = mkrp('s5', True, [agg1, agg2]) # s1 gets s1 (self), # r1 via agg1 through c1, # r2 via agg2 AND via agg3 through c2 # r3 via agg3 # s5 via agg1 and agg2 expected = set(_anchor(s1, rp) for rp in (s1, r1, r2, r3, s5)) self.assertCountEqual( expected, res_ctx.anchors_for_sharing_providers(self.ctx, [s1.id])) # s2 gets s2 (self) and r3 via agg4 expected = set(_anchor(s2, rp) for rp in (s2, r3)) self.assertCountEqual( expected, res_ctx.anchors_for_sharing_providers(self.ctx, [s2.id])) # s3 gets self self.assertEqual( set([_anchor(s3, s3)]), res_ctx.anchors_for_sharing_providers(self.ctx, [s3.id])) # s4 isn't really a sharing provider - gets nothing self.assertEqual( set([]), res_ctx.anchors_for_sharing_providers(self.ctx, [s4.id])) # s5 gets s5 (self), # r1 via agg1 through c1, # r2 via agg2 # s1 via agg1 and agg2 expected = set(_anchor(s5, rp) for rp in (s5, r1, r2, s1)) self.assertCountEqual( expected, res_ctx.anchors_for_sharing_providers(self.ctx, [s5.id])) # validate that we can get them all at once expected = set( [_anchor(s1, rp) for rp in (r1, r2, r3, s1, s5)] + [_anchor(s2, rp) for rp in (r3, s2)] + [_anchor(s3, rp) for rp in (s3,)] + [_anchor(s5, rp) for rp in (r1, r2, s1, s5)] ) self.assertCountEqual( expected, res_ctx.anchors_for_sharing_providers( self.ctx, [s1.id, s2.id, s3.id, s4.id, s5.id])) class SharedProviderTestCase(tb.PlacementDbBaseTestCase): """Tests that the queries used to determine placement in deployments with shared resource providers such as a shared disk pool result in accurate reporting of inventory and usage. """ def _requested_resources(self): STANDARDS = orc.STANDARDS VCPU_ID = STANDARDS.index(orc.VCPU) MEMORY_MB_ID = STANDARDS.index(orc.MEMORY_MB) DISK_GB_ID = STANDARDS.index(orc.DISK_GB) # The resources we will request resources = { VCPU_ID: 1, MEMORY_MB_ID: 64, DISK_GB_ID: 100, } return resources def test_shared_provider_capacity(self): """Sets up a resource provider that shares DISK_GB inventory via an aggregate, a couple resource providers representing "local disk" compute nodes and ensures the _get_providers_sharing_capacity() function finds that provider and not providers of "local disk". """ # Create the two "local disk" compute node providers cn1 = self._create_provider('cn1') cn2 = self._create_provider('cn2') # Populate the two compute node providers with inventory. One has # DISK_GB. Both should be excluded from the result (one doesn't have # the requested resource; but neither is a sharing provider). 
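        # (Only cn1 is given local DISK_GB below; cn2 gets just VCPU and
        # MEMORY_MB.)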
for cn in (cn1, cn2): tb.add_inventory(cn, orc.VCPU, 24, allocation_ratio=16.0) tb.add_inventory(cn, orc.MEMORY_MB, 32768, min_unit=64, max_unit=32768, step_size=64, allocation_ratio=1.5) if cn is cn1: tb.add_inventory(cn, orc.DISK_GB, 2000, min_unit=100, max_unit=2000, step_size=10) # Create the shared storage pool ss1 = self._create_provider('shared storage 1') ss2 = self._create_provider('shared storage 2') # Give the shared storage pool some inventory of DISK_GB for ss, disk_amount in ((ss1, 2000), (ss2, 1000)): tb.add_inventory(ss, orc.DISK_GB, disk_amount, min_unit=100, max_unit=2000, step_size=10) # Mark the shared storage pool as having inventory shared among # any provider associated via aggregate tb.set_traits(ss, "MISC_SHARES_VIA_AGGREGATE") # OK, now that has all been set up, let's verify that we get the ID of # the shared storage pool got_ids = res_ctx.get_sharing_providers(self.ctx) self.assertEqual(set([ss1.id, ss2.id]), got_ids) request = placement_lib.RequestGroup( use_same_provider=False, resources={orc.VCPU: 2, orc.MEMORY_MB: 256, orc.DISK_GB: 1500}) has_trees = res_ctx._has_provider_trees(self.ctx) sharing = res_ctx.get_sharing_providers(self.ctx) rg_ctx = res_ctx.RequestGroupSearchContext( self.ctx, request, has_trees, sharing) VCPU_ID = orc.STANDARDS.index(orc.VCPU) DISK_GB_ID = orc.STANDARDS.index(orc.DISK_GB) rps_sharing_vcpu = rg_ctx.get_rps_with_shared_capacity(VCPU_ID) self.assertEqual(set(), rps_sharing_vcpu) rps_sharing_dist = rg_ctx.get_rps_with_shared_capacity(DISK_GB_ID) self.assertEqual(set([ss1.id]), rps_sharing_dist) # We don't want to waste time sleeping in these tests. It would add # tens of seconds. @mock.patch('time.sleep', return_value=None) class TestEnsureAggregateRetry(tb.PlacementDbBaseTestCase): @mock.patch('placement.objects.resource_provider._ensure_aggregate') def test_retry_happens(self, mock_ens_agg, mock_time): """Confirm that retrying on DBDuplicateEntry happens when ensuring aggregates. """ rp = self._create_provider('rp1') agg_id = self.create_aggregate(uuidsentinel.agg) mock_ens_agg.side_effect = [db_exc.DBDuplicateEntry(), agg_id] rp.set_aggregates([uuidsentinel.agg]) self.assertEqual([uuidsentinel.agg], rp.get_aggregates()) self.assertEqual(2, mock_ens_agg.call_count) @mock.patch('placement.objects.resource_provider._ensure_aggregate') def test_retry_failsover(self, mock_ens_agg, mock_time): """Confirm that the retry loop used when ensuring aggregates only retries 10 times. After that it lets DBDuplicateEntry raise. """ rp = self._create_provider('rp1') mock_ens_agg.side_effect = db_exc.DBDuplicateEntry() self.assertRaises( db_exc.DBDuplicateEntry, rp.set_aggregates, [uuidsentinel.agg]) self.assertEqual(11, mock_ens_agg.call_count) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/db/test_trait.py0000664000175000017500000001633200000000000026237 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
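# A reminder for the tests below: standard traits are defined by the os-traits
# library, custom traits must use the CUSTOM_ prefix, and a trait cannot be
# destroyed while it is still associated with a resource provider.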
import os_traits from placement import exception from placement.objects import trait as trait_obj from placement.tests.functional.db import test_base as tb class TraitTestCase(tb.PlacementDbBaseTestCase): def test_provider_traits_empty_param(self): self.assertRaises(ValueError, trait_obj.get_traits_by_provider_tree, self.ctx, []) def test_trait_ids_from_names_empty_param(self): self.assertRaises(ValueError, trait_obj.ids_from_names, self.ctx, []) def test_trait_create(self): t = trait_obj.Trait(self.ctx) t.name = 'CUSTOM_TRAIT_A' t.create() self.assertIsNotNone(t.id) self.assertEqual(t.name, 'CUSTOM_TRAIT_A') def test_trait_create_with_id_set(self): t = trait_obj.Trait(self.ctx) t.name = 'CUSTOM_TRAIT_A' t.id = 1 self.assertRaises(exception.ObjectActionError, t.create) def test_trait_create_without_name_set(self): t = trait_obj.Trait(self.ctx) self.assertRaises(exception.ObjectActionError, t.create) def test_trait_create_duplicated_trait(self): trait = trait_obj.Trait(self.ctx) trait.name = 'CUSTOM_TRAIT_A' trait.create() tmp_trait = trait_obj.Trait.get_by_name(self.ctx, 'CUSTOM_TRAIT_A') self.assertEqual('CUSTOM_TRAIT_A', tmp_trait.name) duplicated_trait = trait_obj.Trait(self.ctx) duplicated_trait.name = 'CUSTOM_TRAIT_A' self.assertRaises(exception.TraitExists, duplicated_trait.create) def test_trait_get(self): t = trait_obj.Trait(self.ctx) t.name = 'CUSTOM_TRAIT_A' t.create() t = trait_obj.Trait.get_by_name(self.ctx, 'CUSTOM_TRAIT_A') self.assertEqual(t.name, 'CUSTOM_TRAIT_A') def test_trait_get_non_existed_trait(self): self.assertRaises( exception.TraitNotFound, trait_obj.Trait.get_by_name, self.ctx, 'CUSTOM_TRAIT_A') def test_bug_1760322(self): # Under bug # #1760322, if the first hit to the traits table resulted # in an exception, the sync transaction rolled back and the table # stayed empty; but _TRAITS_SYNCED got set to True, so it didn't resync # next time. # NOTE(cdent): With change Ic87518948ed5bf4ab79f9819cd94714e350ce265 # syncing is no longer done in the same way, so the bug fix that this # test was testing is gone, but this test has been left in place to # make sure we still get behavior we expect. try: trait_obj.Trait.get_by_name(self.ctx, 'CUSTOM_GOLD') except exception.TraitNotFound: pass # Under bug #1760322, this raised TraitNotFound. 
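# The lookup of a standard trait below must now succeed, which is only
# possible if the earlier failed custom-trait lookup did not leave the
# traits table empty but marked as synced.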
trait_obj.Trait.get_by_name(self.ctx, os_traits.HW_CPU_X86_AVX2) def test_trait_destroy(self): t = trait_obj.Trait(self.ctx) t.name = 'CUSTOM_TRAIT_A' t.create() t = trait_obj.Trait.get_by_name(self.ctx, 'CUSTOM_TRAIT_A') self.assertEqual(t.name, 'CUSTOM_TRAIT_A') t.destroy() self.assertRaises(exception.TraitNotFound, trait_obj.Trait.get_by_name, self.ctx, 'CUSTOM_TRAIT_A') def test_trait_destroy_with_standard_trait(self): t = trait_obj.Trait(self.ctx) t.id = 1 t.name = 'HW_CPU_X86_AVX' self.assertRaises(exception.TraitCannotDeleteStandard, t.destroy) def test_traits_get_all(self): trait_names = ['CUSTOM_TRAIT_A', 'CUSTOM_TRAIT_B', 'CUSTOM_TRAIT_C'] for name in trait_names: t = trait_obj.Trait(self.ctx) t.name = name t.create() self._assert_traits_in(trait_names, trait_obj.get_all(self.ctx)) def test_traits_get_all_with_name_in_filter(self): trait_names = ['CUSTOM_TRAIT_A', 'CUSTOM_TRAIT_B', 'CUSTOM_TRAIT_C'] for name in trait_names: t = trait_obj.Trait(self.ctx) t.name = name t.create() traits = trait_obj.get_all( self.ctx, filters={'name_in': ['CUSTOM_TRAIT_A', 'CUSTOM_TRAIT_B']}) self._assert_traits(['CUSTOM_TRAIT_A', 'CUSTOM_TRAIT_B'], traits) def test_traits_get_all_with_non_existed_name(self): traits = trait_obj.get_all( self.ctx, filters={'name_in': ['CUSTOM_TRAIT_X', 'CUSTOM_TRAIT_Y']}) self.assertEqual(0, len(traits)) def test_traits_get_all_with_prefix_filter(self): trait_names = ['CUSTOM_TRAIT_A', 'CUSTOM_TRAIT_B', 'CUSTOM_TRAIT_C'] for name in trait_names: t = trait_obj.Trait(self.ctx) t.name = name t.create() traits = trait_obj.get_all(self.ctx, filters={'prefix': 'CUSTOM'}) self._assert_traits( ['CUSTOM_TRAIT_A', 'CUSTOM_TRAIT_B', 'CUSTOM_TRAIT_C'], traits) def test_traits_get_all_with_non_existed_prefix(self): traits = trait_obj.get_all(self.ctx, filters={"prefix": "NOT_EXISTED"}) self.assertEqual(0, len(traits)) def test_trait_delete_in_use(self): rp = self._create_provider('fake_resource_provider') t, = tb.set_traits(rp, 'CUSTOM_TRAIT_A') self.assertRaises(exception.TraitInUse, t.destroy) def test_traits_get_all_with_associated_true(self): rp1 = self._create_provider('fake_resource_provider1') rp2 = self._create_provider('fake_resource_provider2') trait_names = ['CUSTOM_TRAIT_A', 'CUSTOM_TRAIT_B', 'CUSTOM_TRAIT_C'] for name in trait_names: t = trait_obj.Trait(self.ctx) t.name = name t.create() associated_traits = trait_obj.get_all( self.ctx, filters={'name_in': ['CUSTOM_TRAIT_A', 'CUSTOM_TRAIT_B']}) rp1.set_traits(associated_traits) rp2.set_traits(associated_traits) self._assert_traits( ['CUSTOM_TRAIT_A', 'CUSTOM_TRAIT_B'], trait_obj.get_all(self.ctx, filters={'associated': True})) def test_traits_get_all_with_associated_false(self): rp1 = self._create_provider('fake_resource_provider1') rp2 = self._create_provider('fake_resource_provider2') trait_names = ['CUSTOM_TRAIT_A', 'CUSTOM_TRAIT_B', 'CUSTOM_TRAIT_C'] for name in trait_names: t = trait_obj.Trait(self.ctx) t.name = name t.create() associated_traits = trait_obj.get_all( self.ctx, filters={'name_in': ['CUSTOM_TRAIT_A', 'CUSTOM_TRAIT_B']}) rp1.set_traits(associated_traits) rp2.set_traits(associated_traits) self._assert_traits_in( ['CUSTOM_TRAIT_C'], trait_obj.get_all(self.ctx, filters={'associated': False})) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/db/test_usage.py0000664000175000017500000002033300000000000026214 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the 
"License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import os_resource_classes as orc from oslo_utils.fixture import uuidsentinel from oslo_utils import uuidutils from placement.objects import consumer as c_obj from placement.objects import consumer_type as ct_obj from placement.objects import inventory as inv_obj from placement.objects import usage as usage_obj from placement.tests.functional.db import test_base as tb class UsageListTestCase(tb.PlacementDbBaseTestCase): def test_get_all_null(self): for uuid in [uuidsentinel.rp_uuid_1, uuidsentinel.rp_uuid_2]: self._create_provider(uuid, uuid=uuid) usages = usage_obj.get_all_by_resource_provider_uuid( self.ctx, uuidsentinel.rp_uuid_1) self.assertEqual(0, len(usages)) def test_get_all_one_allocation(self): db_rp, _ = self._make_allocation(tb.DISK_INVENTORY, tb.DISK_ALLOCATION) inv = inv_obj.Inventory(resource_provider=db_rp, resource_class=orc.DISK_GB, total=1024) db_rp.set_inventory([inv]) usages = usage_obj.get_all_by_resource_provider_uuid( self.ctx, db_rp.uuid) self.assertEqual(1, len(usages)) self.assertEqual(2, usages[0].usage) self.assertEqual(orc.DISK_GB, usages[0].resource_class) def test_get_inventory_no_allocation(self): db_rp = self._create_provider('rp_no_inv') tb.add_inventory(db_rp, orc.DISK_GB, 1024) usages = usage_obj.get_all_by_resource_provider_uuid( self.ctx, db_rp.uuid) self.assertEqual(1, len(usages)) self.assertEqual(0, usages[0].usage) self.assertEqual(orc.DISK_GB, usages[0].resource_class) def test_get_all_multiple_inv(self): db_rp = self._create_provider('rp_no_inv') tb.add_inventory(db_rp, orc.DISK_GB, 1024) tb.add_inventory(db_rp, orc.VCPU, 24) usages = usage_obj.get_all_by_resource_provider_uuid( self.ctx, db_rp.uuid) self.assertEqual(2, len(usages)) def test_get_by_unspecified_consumer_type(self): # This will add a consumer with a NULL consumer type and the default # project and user external_ids self._make_allocation(tb.DISK_INVENTORY, tb.DISK_ALLOCATION) # Verify we filter the project external_id correctly. 
Note: this will # also work if filtering is broken (if it's not filtering at all) usages = usage_obj.get_by_consumer_type( self.ctx, self.project_obj.external_id) self.assertEqual(1, len(usages)) usage = usages[0] self.assertEqual('unknown', usage.consumer_type) self.assertEqual(1, usage.consumer_count) self.assertEqual(orc.DISK_GB, usage.resource_class) self.assertEqual(2, usage.usage) # Verify we get nothing back if we filter on a different project # external_id that does not exist (will not work if filtering is # broken) usages = usage_obj.get_by_consumer_type(self.ctx, 'BOGUS') self.assertEqual(0, len(usages)) def test_get_by_specified_consumer_type(self): ct = ct_obj.ConsumerType(self.ctx, name='INSTANCE') ct.create() consumer_id = uuidutils.generate_uuid() c = c_obj.Consumer(self.ctx, uuid=consumer_id, project=self.project_obj, user=self.user_obj, consumer_type_id=ct.id) c.create() # This will add a consumer with the consumer type INSTANCE # and the default project and user external_ids da = copy.deepcopy(tb.DISK_ALLOCATION) da['consumer_id'] = c.uuid self._make_allocation(tb.DISK_INVENTORY, da) # Verify we filter the INSTANCE type correctly. Note: this will also # work if filtering is broken (if it's not filtering at all) usages = usage_obj.get_by_consumer_type( self.ctx, self.project_obj.external_id, consumer_type=ct.name) self.assertEqual(1, len(usages)) usage = usages[0] self.assertEqual(ct.name, usage.consumer_type) self.assertEqual(1, usage.consumer_count) self.assertEqual(orc.DISK_GB, usage.resource_class) self.assertEqual(2, usage.usage) # Verify we get nothing back if we filter on a different consumer # type that does not exist (will not work if filtering is broken) usages = usage_obj.get_by_consumer_type( self.ctx, self.project_obj.external_id, consumer_type='BOGUS') self.assertEqual(0, len(usages)) def test_get_by_specified_consumer_type_with_user(self): ct = ct_obj.ConsumerType(self.ctx, name='INSTANCE') ct.create() consumer_id = uuidutils.generate_uuid() c = c_obj.Consumer(self.ctx, uuid=consumer_id, project=self.project_obj, user=self.user_obj, consumer_type_id=ct.id) c.create() # This will add a consumer with the consumer type INSTANCE # and the default project and user external_ids da = copy.deepcopy(tb.DISK_ALLOCATION) da['consumer_id'] = c.uuid db_rp, _ = self._make_allocation(tb.DISK_INVENTORY, da) # Verify we filter the user external_id correctly. 
Note: this will also # work if filtering is broken (if it's not filtering at all) usages = usage_obj.get_by_consumer_type( self.ctx, self.project_obj.external_id, user_id=self.user_obj.external_id, consumer_type=ct.name) self.assertEqual(1, len(usages)) usage = usages[0] self.assertEqual(ct.name, usage.consumer_type) self.assertEqual(1, usage.consumer_count) self.assertEqual(orc.DISK_GB, usage.resource_class) self.assertEqual(2, usage.usage) # Verify we get nothing back if we filter on a different user # external_id that does not exist (will not work if filtering is # broken) usages = usage_obj.get_by_consumer_type( self.ctx, self.project_obj.external_id, user_id='BOGUS', consumer_type=ct.name) self.assertEqual(0, len(usages)) def test_get_by_all_consumer_type(self): # This will add a consumer with the default consumer type UNKNOWN db_rp, _ = self._make_allocation(tb.DISK_INVENTORY, tb.DISK_ALLOCATION) # Make another allocation with a different consumer type ct = ct_obj.ConsumerType(self.ctx, name='FOO') ct.create() consumer_id = uuidutils.generate_uuid() c = c_obj.Consumer(self.ctx, uuid=consumer_id, project=self.project_obj, user=self.user_obj, consumer_type_id=ct.id) c.create() self.allocate_from_provider(db_rp, orc.DISK_GB, 2, consumer=c) # Verify we get usages back for both consumer types with 'all' usages = usage_obj.get_by_consumer_type( self.ctx, self.project_obj.external_id, consumer_type='all') self.assertEqual(1, len(usages)) usage = usages[0] self.assertEqual('all', usage.consumer_type) self.assertEqual(2, usage.consumer_count) self.assertEqual(orc.DISK_GB, usage.resource_class) self.assertEqual(4, usage.usage) def test_get_by_unused_consumer_type(self): # This will add a consumer with the default consumer type UNKNOWN self._make_allocation(tb.DISK_INVENTORY, tb.DISK_ALLOCATION) usages = usage_obj.get_by_consumer_type( self.ctx, self.project_obj.external_id, consumer_type='EMPTY') self.assertEqual(0, len(usages)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/db/test_user.py0000664000175000017500000000246200000000000026071 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
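# ---------------------------------------------------------------------------
# Illustrative sketch, not part of the upstream suite: the filter combinations
# for usage_obj.get_by_consumer_type() exercised by UsageListTestCase above.
# The project external id appears in every call; user_id and consumer_type
# are optional keyword filters, and consumer_type='all' aggregates usage
# across every consumer type, as the tests demonstrate. ctx, project_id and
# user_id stand in for a request context and external ids.

from placement.objects import usage as usage_obj


def example_usage_queries(ctx, project_id, user_id):
    # All usage for the project, summed across consumer types.
    across_types = usage_obj.get_by_consumer_type(
        ctx, project_id, consumer_type='all')
    # Usage for a single consumer type.
    instances_only = usage_obj.get_by_consumer_type(
        ctx, project_id, consumer_type='INSTANCE')
    # Usage for one user within that consumer type.
    one_user = usage_obj.get_by_consumer_type(
        ctx, project_id, user_id=user_id, consumer_type='INSTANCE')
    return across_types, instances_only, one_user
# ---------------------------------------------------------------------------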
from oslo_utils.fixture import uuidsentinel as uuids from placement import exception from placement.objects import user as user_obj from placement.tests.functional.db import test_base as tb class UserTestCase(tb.PlacementDbBaseTestCase): def test_non_existing_user(self): self.assertRaises( exception.UserNotFound, user_obj.User.get_by_external_id, self.ctx, uuids.non_existing_user) def test_create_and_get(self): u = user_obj.User(self.ctx, external_id='another-user') u.create() u = user_obj.User.get_by_external_id(self.ctx, 'another-user') # User ID == 1 is fake-user created in setup self.assertEqual(2, u.id) self.assertRaises(exception.UserExists, u.create) ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.264778 openstack_placement-13.0.0/placement/tests/functional/fixtures/0000775000175000017500000000000000000000000024762 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/fixtures/__init__.py0000664000175000017500000000000000000000000027061 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/fixtures/capture.py0000664000175000017500000000622600000000000027005 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import logging import warnings import fixtures from oslotest import log from sqlalchemy import exc as sqla_exc class NullHandler(logging.Handler): """custom default NullHandler to attempt to format the record. Used in conjunction with Logging below to detect formatting errors in debug logs. """ def handle(self, record): self.format(record) def emit(self, record): pass def createLock(self): self.lock = None class Logging(log.ConfigureLogging): """A logging fixture providing two important fixtures. One is to capture logs for later inspection. The other is to make sure that DEBUG logs, even if not captured, are formatted. """ def __init__(self): super(Logging, self).__init__() # If level was not otherwise set, default to INFO. if self.level is None: self.level = logging.INFO # Always capture logs, unlike the parent. self.capture_logs = True def setUp(self): super(Logging, self).setUp() if self.level > logging.DEBUG: handler = NullHandler() self.useFixture(fixtures.LogHandler(handler, nuke_handlers=False)) handler.setLevel(logging.DEBUG) class WarningsFixture(fixtures.Fixture): """Filter or escalates certain warnings during test runs. Add additional entries as required. Remove when obsolete. """ def setUp(self): super(WarningsFixture, self).setUp() self._original_warning_filters = warnings.filters[:] warnings.simplefilter("once", DeprecationWarning) # Ignore policy scope warnings. 
warnings.filterwarnings( 'ignore', message="Policy .* failed scope check", category=UserWarning) # The UUIDFields emits a warning if the value is not a valid UUID. # Let's escalate that to an exception in the test to prevent adding # violations. warnings.filterwarnings('error', message=".*invalid UUID.*") # Prevent us introducing unmapped columns warnings.filterwarnings( 'error', category=sqla_exc.SAWarning) # Configure SQLAlchemy warnings warnings.filterwarnings( 'ignore', category=sqla_exc.SADeprecationWarning) warnings.filterwarnings( 'error', module='placement', category=sqla_exc.SADeprecationWarning) self.addCleanup(self._reset_warning_filters) def _reset_warning_filters(self): warnings.filters[:] = self._original_warning_filters ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/fixtures/gabbits.py0000664000175000017500000013616500000000000026763 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from gabbi import fixture import os_resource_classes as orc import os_traits as ot from oslo_config import cfg from oslo_config import fixture as config_fixture from oslo_log.fixture import logging_error from oslo_policy import opts as policy_opts from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import uuidutils from oslotest import output from placement import conf from placement import context from placement import deploy from placement.objects import project as project_obj from placement.objects import resource_class as rc_obj from placement.objects import user as user_obj from placement import policies from placement.tests import fixtures from placement.tests.functional.db import test_base as tb from placement.tests.functional.fixtures import capture from placement.tests.unit import policy_fixture # This global conf is not a global oslo_config.cfg.CONF. It's a global # used locally to work around a limitation in the way that gabbi instantiates # the WSGI application being tested. CONF = None def setup_app(): global CONF return deploy.loadapp(CONF) class APIFixture(fixture.GabbiFixture): """Setup the required backend fixtures for a basic placement service.""" # TODO(stephenfin): Remove this once we drop the deprecated policy rules _secure_rbac = False def start_fixture(self): global CONF # Set up stderr and stdout captures by directly driving the # existing nova fixtures that do that. This captures the # output that happens outside individual tests (for # example database migrations). self.standard_logging_fixture = capture.Logging() self.standard_logging_fixture.setUp() self.output_stream_fixture = output.CaptureOutput() self.output_stream_fixture.setUp() self.logging_error_fixture = ( logging_error.get_logging_handle_error_fixture()) self.logging_error_fixture.setUp() # Filter ignorable warnings during test runs. 
self.warnings_fixture = capture.WarningsFixture() self.warnings_fixture.setUp() # Do not use global CONF self.conf_fixture = config_fixture.Config(cfg.ConfigOpts()) self.conf_fixture.setUp() conf.register_opts(self.conf_fixture.conf) self.conf_fixture.config(group='api', auth_strategy='noauth2') self.conf_fixture.config( group='oslo_policy', enforce_scope=self._secure_rbac, enforce_new_defaults=self._secure_rbac, ) self.placement_db_fixture = fixtures.Database( self.conf_fixture, set_config=True) self.placement_db_fixture.setUp() self.context = context.RequestContext() # Some database interaction methods require access to the oslo config # via the context. Within the WSGI application this is taken care of # but here in the fixtures we use some of those methods to create # entities. self.context.config = self.conf_fixture.conf # Set default policy opts, otherwise the deploy module can # NoSuchOptError. policy_opts.set_defaults(self.conf_fixture.conf) # Make sure default_config_files is an empty list, not None. # If None /etc/placement/placement.conf is read and confuses results. self.conf_fixture.conf([], default_config_files=[]) # Turn on a policy fixture. self.policy_fixture = policy_fixture.PolicyFixture( self.conf_fixture) self.policy_fixture.setUp() os.environ['RP_UUID'] = uuidutils.generate_uuid() os.environ['RP_NAME'] = uuidutils.generate_uuid() os.environ['RP_UUID1'] = uuidutils.generate_uuid() os.environ['RP_NAME1'] = uuidutils.generate_uuid() os.environ['RP_UUID2'] = uuidutils.generate_uuid() os.environ['RP_NAME2'] = uuidutils.generate_uuid() os.environ['CUSTOM_RES_CLASS'] = 'CUSTOM_IRON_NFV' os.environ['CUSTOM_RES_CLASS1'] = 'CUSTOM_IRON_NFV1' os.environ['CUSTOM_RES_CLASS2'] = 'CUSTOM_IRON_NFV2' os.environ['PROJECT_ID'] = uuidutils.generate_uuid() os.environ['ADMIN_PROJECT_ID'] = uuidutils.generate_uuid() os.environ['SERVICE_PROJECT_ID'] = uuidutils.generate_uuid() os.environ['USER_ID'] = uuidutils.generate_uuid() os.environ['PROJECT_ID_ALT'] = uuidutils.generate_uuid() os.environ['USER_ID_ALT'] = uuidutils.generate_uuid() os.environ['INSTANCE_UUID'] = uuidutils.generate_uuid() os.environ['MIGRATION_UUID'] = uuidutils.generate_uuid() os.environ['CONSUMER_UUID'] = uuidutils.generate_uuid() os.environ['PARENT_PROVIDER_UUID'] = uuidutils.generate_uuid() os.environ['ALT_PARENT_PROVIDER_UUID'] = uuidutils.generate_uuid() CONF = self.conf_fixture.conf def stop_fixture(self): global CONF self.placement_db_fixture.cleanUp() self.warnings_fixture.cleanUp() self.output_stream_fixture.cleanUp() self.standard_logging_fixture.cleanUp() self.logging_error_fixture.cleanUp() self.policy_fixture.cleanUp() self.conf_fixture.cleanUp() CONF = None class AllocationFixture(APIFixture): """An APIFixture that has some pre-made Allocations. 
+----- same user----+ alt_user | | | +----+----------+ +------+-----+ +-----+---------+ | consumer1 | | consumer2 | | alt_consumer | | DISK_GB:1000 | | VCPU: 6 | | VCPU: 1 | | | | | | DISK_GB:20 | +-------------+-+ +------+-----+ +-+-------------+ | | | +-+----------+---------+-+ | rp | | VCPU: 10 | | DISK_GB:2048 | +------------------------+ """ def start_fixture(self): super(AllocationFixture, self).start_fixture() # For use creating and querying allocations/usages os.environ['ALT_USER_ID'] = uuidutils.generate_uuid() project_id = os.environ['PROJECT_ID'] user_id = os.environ['USER_ID'] alt_user_id = os.environ['ALT_USER_ID'] user = user_obj.User(self.context, external_id=user_id) user.create() alt_user = user_obj.User(self.context, external_id=alt_user_id) alt_user.create() project = project_obj.Project(self.context, external_id=project_id) project.create() # Stealing from the super rp_name = os.environ['RP_NAME'] rp_uuid = os.environ['RP_UUID'] # Create the rp with VCPU and DISK_GB inventory rp = tb.create_provider(self.context, rp_name, uuid=rp_uuid) tb.add_inventory(rp, 'DISK_GB', 2048, step_size=10, min_unit=10, max_unit=1000) tb.add_inventory(rp, 'VCPU', 10, max_unit=10) # Create a first consumer for the DISK_GB allocations consumer1 = tb.ensure_consumer(self.context, user, project) tb.set_allocation(self.context, rp, consumer1, {'DISK_GB': 1000}) os.environ['CONSUMER_0'] = consumer1.uuid # Create a second consumer for the VCPU allocations consumer2 = tb.ensure_consumer(self.context, user, project) tb.set_allocation(self.context, rp, consumer2, {'VCPU': 6}) os.environ['CONSUMER_ID'] = consumer2.uuid # Create a consumer object for a different user alt_consumer = tb.ensure_consumer(self.context, alt_user, project) os.environ['ALT_CONSUMER_ID'] = alt_consumer.uuid # Create a couple of allocations for a different user. tb.set_allocation(self.context, rp, alt_consumer, {'DISK_GB': 20, 'VCPU': 1}) # The ALT_RP_XXX variables are for a resource provider that has # not been created in the Allocation fixture os.environ['ALT_RP_UUID'] = uuidutils.generate_uuid() os.environ['ALT_RP_NAME'] = uuidutils.generate_uuid() class SharedStorageFixture(APIFixture): """An APIFixture that has two compute nodes, one with local storage and one without, both associated by aggregate to two providers of shared storage. Both compute nodes have respectively two numa node resource providers, each of which has a pf resource provider. +-------------------------+ +-------------------------+ | sharing storage (ss) | | sharing storage (ss2) | | DISK_GB:2000 |----+---| DISK_GB:2000 | | traits: MISC_SHARES... | | | traits: MISC_SHARES... 
| +-------------------------+ | +-------------------------+ | aggregate +--------------------------+ | +------------------------+ | compute node (cn1) |---+---| compute node (cn2) | | CPU: 24 | | CPU: 24 | | MEMORY_MB: 128*1024 | | MEMORY_MB: 128*1024 | | traits: HW_CPU_X86_SSE, | | DISK_GB: 2000 | | HW_CPU_X86_SSE2 | | | +--------------------------+ +------------------------+ | | | | +---------+ +---------+ +---------+ +---------+ | numa1_1 | | numa1_2 | | numa2_1 | | numa2_2 | +---------+ +---------+ +---------+ +---------+ | | | | +---------------++---------------++---------------++----------------+ | pf1_1 || pf1_2 || pf2_1 || pf2_2 | | SRIOV_NET_VF:8|| SRIOV_NET_VF:8|| SRIOV_NET_VF:8|| SRIOV_NET_VF:8 | +---------------++---------------++---------------++----------------+ """ def start_fixture(self): super(SharedStorageFixture, self).start_fixture() agg_uuid = uuidutils.generate_uuid() cn1 = tb.create_provider(self.context, 'cn1', agg_uuid) cn2 = tb.create_provider(self.context, 'cn2', agg_uuid) ss = tb.create_provider(self.context, 'ss', agg_uuid) ss2 = tb.create_provider(self.context, 'ss2', agg_uuid) numa1_1 = tb.create_provider(self.context, 'numa1_1', parent=cn1.uuid) numa1_2 = tb.create_provider(self.context, 'numa1_2', parent=cn1.uuid) numa2_1 = tb.create_provider(self.context, 'numa2_1', parent=cn2.uuid) numa2_2 = tb.create_provider(self.context, 'numa2_2', parent=cn2.uuid) pf1_1 = tb.create_provider(self.context, 'pf1_1', parent=numa1_1.uuid) pf1_2 = tb.create_provider(self.context, 'pf1_2', parent=numa1_2.uuid) pf2_1 = tb.create_provider(self.context, 'pf2_1', parent=numa2_1.uuid) pf2_2 = tb.create_provider(self.context, 'pf2_2', parent=numa2_2.uuid) os.environ['AGG_UUID'] = agg_uuid os.environ['CN1_UUID'] = cn1.uuid os.environ['CN2_UUID'] = cn2.uuid os.environ['SS_UUID'] = ss.uuid os.environ['SS2_UUID'] = ss2.uuid os.environ['NUMA1_1_UUID'] = numa1_1.uuid os.environ['NUMA1_2_UUID'] = numa1_2.uuid os.environ['NUMA2_1_UUID'] = numa2_1.uuid os.environ['NUMA2_2_UUID'] = numa2_2.uuid os.environ['PF1_1_UUID'] = pf1_1.uuid os.environ['PF1_2_UUID'] = pf1_2.uuid os.environ['PF2_1_UUID'] = pf2_1.uuid os.environ['PF2_2_UUID'] = pf2_2.uuid # Populate compute node inventory for VCPU and RAM for cn in (cn1, cn2): tb.add_inventory(cn, orc.VCPU, 24, allocation_ratio=16.0) tb.add_inventory(cn, orc.MEMORY_MB, 128 * 1024, allocation_ratio=1.5) tb.set_traits(cn1, 'HW_CPU_X86_SSE', 'HW_CPU_X86_SSE2') tb.add_inventory(cn2, orc.DISK_GB, 2000, reserved=100, allocation_ratio=1.0) for shared in (ss, ss2): # Populate shared storage provider with DISK_GB inventory and # mark it shared among any provider associated via aggregate tb.add_inventory(shared, orc.DISK_GB, 2000, reserved=100, allocation_ratio=1.0) tb.set_traits(shared, 'MISC_SHARES_VIA_AGGREGATE') # Populate PF inventory for VF for pf in (pf1_1, pf1_2, pf2_1, pf2_2): tb.add_inventory(pf, orc.SRIOV_NET_VF, 8, allocation_ratio=1.0) class NUMAAggregateFixture(APIFixture): """An APIFixture that has two compute nodes without a resource themselves. They are associated by aggregate to a provider of shared storage and both compute nodes have two numa node resource providers with CPUs. One of the numa node is associated to another sharing storage by a different aggregate. 
+-----------------------+ | sharing storage (ss1) | | DISK_GB:2000 | | agg: [aggA] | +-----------+-----------+ | +---------------+----------------+ +---------------|--------------+ +--------------|--------------+ | +-------------+------------+ | | +------------+------------+ | | | compute node (cn1) | | | |compute node (cn2) | | | | agg: [aggA] | | | | agg: [aggA, aggB] | | | +-----+-------------+------+ | | +----+-------------+------+ | | | nested | nested | | | nested | nested | | +-----+------+ +----+------+ | | +----+------+ +----+------+ | | | numa1_1 | | numa1_2 | | | | numa2_1 | | numa2_2 | | | | CPU: 24 | | CPU: 24 | | | | CPU: 24 | | CPU: 24 | | | | agg:[aggC]| | | | | | | | | | | +-----+------+ +-----------+ | | +-----------+ +-----------+ | +-------|----------------------+ +-----------------------------+ | aggC +-----+-----------------+ | sharing storage (ss2) | | DISK_GB:2000 | | agg: [aggC] | +-----------------------+ """ def start_fixture(self): super(NUMAAggregateFixture, self).start_fixture() aggA_uuid = uuidutils.generate_uuid() aggB_uuid = uuidutils.generate_uuid() aggC_uuid = uuidutils.generate_uuid() cn1 = tb.create_provider(self.context, 'cn1', aggA_uuid) cn2 = tb.create_provider(self.context, 'cn2', aggA_uuid, aggB_uuid) ss1 = tb.create_provider(self.context, 'ss1', aggA_uuid) ss2 = tb.create_provider(self.context, 'ss2', aggC_uuid) numa1_1 = tb.create_provider( self.context, 'numa1_1', aggC_uuid, parent=cn1.uuid) numa1_2 = tb.create_provider(self.context, 'numa1_2', parent=cn1.uuid) numa2_1 = tb.create_provider(self.context, 'numa2_1', parent=cn2.uuid) numa2_2 = tb.create_provider(self.context, 'numa2_2', parent=cn2.uuid) os.environ['AGGA_UUID'] = aggA_uuid os.environ['AGGB_UUID'] = aggB_uuid os.environ['AGGC_UUID'] = aggC_uuid os.environ['CN1_UUID'] = cn1.uuid os.environ['CN2_UUID'] = cn2.uuid os.environ['SS1_UUID'] = ss1.uuid os.environ['SS2_UUID'] = ss2.uuid os.environ['NUMA1_1_UUID'] = numa1_1.uuid os.environ['NUMA1_2_UUID'] = numa1_2.uuid os.environ['NUMA2_1_UUID'] = numa2_1.uuid os.environ['NUMA2_2_UUID'] = numa2_2.uuid # Populate compute node inventory for VCPU and RAM for numa in (numa1_1, numa1_2, numa2_1, numa2_2): tb.add_inventory(numa, orc.VCPU, 24, allocation_ratio=16.0) # Populate shared storage provider with DISK_GB inventory and # mark it shared among any provider associated via aggregate for ss in (ss1, ss2): tb.add_inventory(ss, orc.DISK_GB, 2000, reserved=100, allocation_ratio=1.0) tb.set_traits(ss, 'MISC_SHARES_VIA_AGGREGATE') class NUMANetworkFixture(APIFixture): """An APIFixture representing compute hosts with characteristics such as: * A root compute node provider with no resources (VCPU/MEMORY_MB in NUMA providers, DISK_GB provided by a sharing provider). * NUMA nodes, providing VCPU and MEMORY_MB resources (with interesting min_unit and step_size values), decorated with the HW_NUMA_ROOT trait. * Each NUMA node is associated with some devices. * Two network agents, themselves devoid of resources, parenting different kinds of network devices. * A more "normal" compute node provider with VCPU/MEMORY_MB/DISK_GB resources. * Some NIC subtree roots, themselves devoid of resources, decorated with the HW_NIC_ROOT trait, parenting PF providers, on different physical networks, with VF resources. +-----------------------+ | sharing storage (ss1) | | MISC_SHARES_VIA_AGG.. 
| | DISK_GB:2000 |............(to cn2)......>> | agg: [aggA] | +-----------+-----------+ : +-----------------------------+ | compute node (cn1) | | COMPUTE_VOLUME_MULTI_ATTACH | | (no inventory) | | agg: [aggA] | +---------------+-------------+ | +--------------------+------+--------+-------------------+ | | | | +---------+--------+ +---------+--------+ | | | numa0 | | numa1 | | +-----------+------+ | HW_NUMA_ROOT | | HW_NUMA_ROOT, FOO| | | ovs_agent | | VCPU: 4 (2 used) | | VCPU: 4 | | | VNIC_TYPE_NORMAL | | MEMORY_MB: 2048 | | MEMORY_MB: 2048 | | +-----------+------+ | min_unit: 512 | | min_unit: 256 | | | | step_size: 256 | | max_unit: 1024 | | | +---+----------+---+ +---+----------+---+ +---+--------------+ | | | | | | sriov_agent | | +---+---+ +---+---+ +---+---+ +---+---+ | VNIC_TYPE_DIRECT | | |fpga0 | |pgpu0 | |fpga1_0| |fpga1_1| +---+--------------+ | |FPGA:1 | |VGPU:8 | |FPGA:1 | |FPGA:1 | | | +-------+ +-------+ +-------+ +-------+ | +----+------+ +-----+------+ |br_int | | | |PHYSNET0 | +------+-----++-----+------+|BW_EGR:1000| |esn1 ||esn2 |+-----------+ |PHYSNET1 ||PHYSNET2 | |BW_EGR:10000||BW_EGR:20000| +------------++------------+ +--------------------+ | compute node (cn2) | | VCPU: 8 (3 used) | | MEMORY_MB: 2048 | >>....(from ss1)........| min_unit: 1024 | | step_size: 128 | | DISK_GB: 1000 | | traits: FOO | | agg: [aggA] | +---------+----------+ | +--------------------------+----+---------------------------+ | | | +-----+-----+ +-----+-----+ +-----+-----+ |nic1 | |nic2 | |nic3 | |HW_NIC_ROOT| |HW_NIC_ROOT| |HW_NIC_ROOT| +-----+-----+ +-----+-----+ +-----+-----+ | | | +----+----+ +---------+---+-----+---------+ | | | | | | | | +--+--+ +--+--+ +--+--+ +--+--+ +--+--+ +--+--+ +--+--+ |pf1_1| |pf1_2| |pf2_1| |pf2_2| |pf2_3| |pf2_4| |pf3_1| |NET1 | |NET2 | |NET1 | |NET2 | |NET1 | |NET2 | |NET1 | |VF:4 | |VF:4 | |VF:2 | |VF:2 | |VF:2 | |VF:2 | |VF:8 | +-----+ +-----+ +-----+ +-----+ +-----+ +-----+ +-----+ """ # Having these here allows us to pre-create cn1 and cn2 providers in # DeepNUMANetworkFixture, where they have additional parents. 
cn1 = None cn2 = None def start_fixture(self): super(NUMANetworkFixture, self).start_fixture() self.make_entities() def make_entities(self): aggA_uuid = uuidutils.generate_uuid() os.environ['AGGA_UUID'] = aggA_uuid ss1 = tb.create_provider(self.context, 'ss1', aggA_uuid) tb.set_traits(ss1, ot.MISC_SHARES_VIA_AGGREGATE) tb.add_inventory(ss1, orc.DISK_GB, 2000) os.environ['SS1_UUID'] = ss1.uuid # CN1 if not self.cn1: self.cn1 = tb.create_provider(self.context, 'cn1', aggA_uuid) self.cn1.set_aggregates([aggA_uuid]) tb.set_traits(self.cn1, ot.COMPUTE_VOLUME_MULTI_ATTACH) os.environ['CN1_UUID'] = self.cn1.uuid numas = [] for i in (0, 1): numa = tb.create_provider( self.context, 'numa%d' % i, parent=self.cn1.uuid) traits = [ot.HW_NUMA_ROOT] if i == 1: traits.append('CUSTOM_FOO') tb.set_traits(numa, *traits) tb.add_inventory(numa, orc.VCPU, 4) numas.append(numa) os.environ['NUMA%d_UUID' % i] = numa.uuid tb.add_inventory( numas[0], orc.MEMORY_MB, 2048, min_unit=512, step_size=256) tb.add_inventory( numas[1], orc.MEMORY_MB, 2048, min_unit=256, max_unit=1024) user, proj = tb.create_user_and_project(self.context, prefix='numafx') consumer = tb.ensure_consumer(self.context, user, proj) tb.set_allocation(self.context, numas[0], consumer, {orc.VCPU: 2}) fpga = tb.create_provider(self.context, 'fpga0', parent=numas[0].uuid) # TODO(efried): Use standard FPGA resource class tb.add_inventory(fpga, 'CUSTOM_FPGA', 1) os.environ['FPGA0_UUID'] = fpga.uuid pgpu = tb.create_provider(self.context, 'pgpu0', parent=numas[0].uuid) tb.add_inventory(pgpu, orc.VGPU, 8) os.environ['PGPU0_UUID'] = pgpu.uuid for i in (0, 1): fpga = tb.create_provider( self.context, 'fpga1_%d' % i, parent=numas[1].uuid) # TODO(efried): Use standard FPGA resource class tb.add_inventory(fpga, 'CUSTOM_FPGA', 1) os.environ['FPGA1_%d_UUID' % i] = fpga.uuid agent = tb.create_provider( self.context, 'sriov_agent', parent=self.cn1.uuid) tb.set_traits(agent, 'CUSTOM_VNIC_TYPE_DIRECT') os.environ['SRIOV_AGENT_UUID'] = agent.uuid for i in (1, 2): dev = tb.create_provider( self.context, 'esn%d' % i, parent=agent.uuid) tb.set_traits(dev, 'CUSTOM_PHYSNET%d' % i) tb.add_inventory(dev, orc.NET_BW_EGR_KILOBIT_PER_SEC, 10000 * i) os.environ['ESN%d_UUID' % i] = dev.uuid agent = tb.create_provider( self.context, 'ovs_agent', parent=self.cn1.uuid) tb.set_traits(agent, 'CUSTOM_VNIC_TYPE_NORMAL') os.environ['OVS_AGENT_UUID'] = agent.uuid dev = tb.create_provider(self.context, 'br_int', parent=agent.uuid) tb.set_traits(dev, 'CUSTOM_PHYSNET0') tb.add_inventory(dev, orc.NET_BW_EGR_KILOBIT_PER_SEC, 1000) os.environ['BR_INT_UUID'] = dev.uuid # CN2 if not self.cn2: self.cn2 = tb.create_provider(self.context, 'cn2') self.cn2.set_aggregates([aggA_uuid]) tb.add_inventory(self.cn2, orc.VCPU, 8) # Get a new consumer consumer = tb.ensure_consumer(self.context, user, proj) tb.set_allocation(self.context, self.cn2, consumer, {orc.VCPU: 3}) tb.add_inventory( self.cn2, orc.MEMORY_MB, 2048, min_unit=1024, step_size=128) tb.add_inventory(self.cn2, orc.DISK_GB, 1000) tb.set_traits(self.cn2, 'CUSTOM_FOO') os.environ['CN2_UUID'] = self.cn2.uuid nics = [] for i in (1, 2, 3): nic = tb.create_provider( self.context, 'nic%d' % i, parent=self.cn2.uuid) # TODO(efried): Use standard HW_NIC_ROOT trait tb.set_traits(nic, 'CUSTOM_HW_NIC_ROOT') nics.append(nic) os.environ['NIC%s_UUID' % i] = nic.uuid # PFs for NIC1 for i in (1, 2): suf = '1_%d' % i pf = tb.create_provider( self.context, 'pf%s' % suf, parent=nics[0].uuid) tb.set_traits(pf, 'CUSTOM_PHYSNET%d' % i) # TODO(efried): Use standard 
generic VF resource class? tb.add_inventory(pf, 'CUSTOM_VF', 4) os.environ['PF%s_UUID' % suf] = pf.uuid # PFs for NIC2 for i in (0, 1, 2, 3): suf = '2_%d' % (i + 1) pf = tb.create_provider( self.context, 'pf%s' % suf, parent=nics[1].uuid) tb.set_traits(pf, 'CUSTOM_PHYSNET%d' % ((i % 2) + 1)) # TODO(efried): Use standard generic VF resource class? tb.add_inventory(pf, 'CUSTOM_VF', 2) os.environ['PF%s_UUID' % suf] = pf.uuid # PF for NIC3 suf = '3_1' pf = tb.create_provider( self.context, 'pf%s' % suf, parent=nics[2].uuid) tb.set_traits(pf, 'CUSTOM_PHYSNET1') # TODO(efried): Use standard generic VF resource class? tb.add_inventory(pf, 'CUSTOM_VF', 8) os.environ['PF%s_UUID' % suf] = pf.uuid class DeepNUMANetworkFixture(NUMANetworkFixture): """Extend the NUMANetworkFixture with two empty resource providers as parents and grandparents of the compute nodes. This is to exercise same_subtree in a more complete fashion. """ def make_entities(self): """Create parents and grandparents for cn1 and cn2. They will be fully populated by the superclass, NUMANetworkFixture. """ grandparent1 = tb.create_provider(self.context, 'gp1') parent1 = tb.create_provider( self.context, 'p1', parent=grandparent1.uuid) parent2 = tb.create_provider( self.context, 'p2', parent=grandparent1.uuid) self.cn1 = tb.create_provider(self.context, 'cn1', parent=parent1.uuid) self.cn2 = tb.create_provider(self.context, 'cn2', parent=parent2.uuid) super(DeepNUMANetworkFixture, self).make_entities() class NonSharedStorageFixture(APIFixture): """An APIFixture that has three compute nodes with local storage that do not use shared storage. """ def start_fixture(self): super(NonSharedStorageFixture, self).start_fixture() aggA_uuid = uuidutils.generate_uuid() aggB_uuid = uuidutils.generate_uuid() aggC_uuid = uuidutils.generate_uuid() os.environ['AGGA_UUID'] = aggA_uuid os.environ['AGGB_UUID'] = aggB_uuid os.environ['AGGC_UUID'] = aggC_uuid cn1 = tb.create_provider(self.context, 'cn1') cn2 = tb.create_provider(self.context, 'cn2') cn3 = tb.create_provider(self.context, 'cn3') os.environ['CN1_UUID'] = cn1.uuid os.environ['CN2_UUID'] = cn2.uuid os.environ['CN3_UUID'] = cn3.uuid # Populate compute node inventory for VCPU, RAM and DISK for cn in (cn1, cn2, cn3): tb.add_inventory(cn, 'VCPU', 24) tb.add_inventory(cn, 'MEMORY_MB', 128 * 1024) tb.add_inventory(cn, 'DISK_GB', 2000) class CORSFixture(APIFixture): """An APIFixture that turns on CORS.""" def start_fixture(self): super(CORSFixture, self).start_fixture() # Turn on the CORS middleware by setting 'allowed_origin'. self.conf_fixture.config( group='cors', allowed_origin='http://valid.example.com') self.conf_fixture.config( group='cors', allow_headers=['openstack-api-version']) class GranularFixture(APIFixture): """An APIFixture that sets up the following provider environment for testing granular resource requests. 
+========================++========================++========================+ |cn_left ||cn_middle ||cn_right | |VCPU: 8 ||VCPU: 8 ||VCPU: 8 | |MEMORY_MB: 4096 ||MEMORY_MB: 4096 ||MEMORY_MB: 4096 | |DISK_GB: 500 ||SRIOV_NET_VF: 8 ||DISK_GB: 500 | |VGPU: 8 ||CUSTOM_NET_MBPS: 4000 ||VGPU: 8 | |SRIOV_NET_VF: 8 ||traits: HW_CPU_X86_AVX, || - max_unit: 2 | |CUSTOM_NET_MBPS: 4000 || HW_CPU_X86_AVX2,||traits: HW_CPU_X86_MMX, | |traits: HW_CPU_X86_AVX, || HW_CPU_X86_SSE, || HW_GPU_API_DXVA,| | HW_CPU_X86_AVX2,|| HW_NIC_ACCEL_TLS|| CUSTOM_DISK_SSD,| | HW_GPU_API_DXVA,|+=+=====+================++==+========+============+ | HW_NIC_DCB_PFC, | : : : : a | CUSTOM_FOO +..+ +--------------------+ : g +========================+ : a : : g : g : : C +========================+ : g : +===============+======+ |shr_disk_1 | : A : |shr_net | |DISK_GB: 1000 +..+ : |SRIOV_NET_VF: 16 | |traits: CUSTOM_DISK_SSD,| : : a |CUSTOM_NET_MBPS: 40000| | MISC_SHARES_VIA_AGG...| : : g |traits: MISC_SHARES...| +========================+ : : g +======================+ +=======================+ : : B |shr_disk_2 +...+ : |DISK_GB: 1000 | : |traits: MISC_SHARES... +.........+ +=======================+ """ def start_fixture(self): super(GranularFixture, self).start_fixture() rc_obj.ResourceClass( context=self.context, name='CUSTOM_NET_MBPS').create() os.environ['AGGA'] = uuids.aggA os.environ['AGGB'] = uuids.aggB os.environ['AGGC'] = uuids.aggC cn_left = tb.create_provider(self.context, 'cn_left', uuids.aggA) os.environ['CN_LEFT'] = cn_left.uuid tb.add_inventory(cn_left, 'VCPU', 8) tb.add_inventory(cn_left, 'MEMORY_MB', 4096) tb.add_inventory(cn_left, 'DISK_GB', 500) tb.add_inventory(cn_left, 'VGPU', 8) tb.add_inventory(cn_left, 'SRIOV_NET_VF', 8) tb.add_inventory(cn_left, 'CUSTOM_NET_MBPS', 4000) tb.set_traits(cn_left, 'HW_CPU_X86_AVX', 'HW_CPU_X86_AVX2', 'HW_GPU_API_DXVA', 'HW_NIC_DCB_PFC', 'CUSTOM_FOO') cn_middle = tb.create_provider( self.context, 'cn_middle', uuids.aggA, uuids.aggB) os.environ['CN_MIDDLE'] = cn_middle.uuid tb.add_inventory(cn_middle, 'VCPU', 8) tb.add_inventory(cn_middle, 'MEMORY_MB', 4096) tb.add_inventory(cn_middle, 'SRIOV_NET_VF', 8) tb.add_inventory(cn_middle, 'CUSTOM_NET_MBPS', 4000) tb.set_traits(cn_middle, 'HW_CPU_X86_AVX', 'HW_CPU_X86_AVX2', 'HW_CPU_X86_SSE', 'HW_NIC_ACCEL_TLS') cn_right = tb.create_provider( self.context, 'cn_right', uuids.aggB, uuids.aggC) os.environ['CN_RIGHT'] = cn_right.uuid tb.add_inventory(cn_right, 'VCPU', 8) tb.add_inventory(cn_right, 'MEMORY_MB', 4096) tb.add_inventory(cn_right, 'DISK_GB', 500) tb.add_inventory(cn_right, 'VGPU', 8, max_unit=2) tb.set_traits(cn_right, 'HW_CPU_X86_MMX', 'HW_GPU_API_DXVA', 'CUSTOM_DISK_SSD') shr_disk_1 = tb.create_provider(self.context, 'shr_disk_1', uuids.aggA) os.environ['SHR_DISK_1'] = shr_disk_1.uuid tb.add_inventory(shr_disk_1, 'DISK_GB', 1000) tb.set_traits(shr_disk_1, 'MISC_SHARES_VIA_AGGREGATE', 'CUSTOM_DISK_SSD') shr_disk_2 = tb.create_provider( self.context, 'shr_disk_2', uuids.aggA, uuids.aggB) os.environ['SHR_DISK_2'] = shr_disk_2.uuid tb.add_inventory(shr_disk_2, 'DISK_GB', 1000) tb.set_traits(shr_disk_2, 'MISC_SHARES_VIA_AGGREGATE') shr_net = tb.create_provider(self.context, 'shr_net', uuids.aggC) os.environ['SHR_NET'] = shr_net.uuid tb.add_inventory(shr_net, 'SRIOV_NET_VF', 16) tb.add_inventory(shr_net, 'CUSTOM_NET_MBPS', 40000) tb.set_traits(shr_net, 'MISC_SHARES_VIA_AGGREGATE') class OpenPolicyFixture(APIFixture): """An APIFixture that changes all policy rules to allow non-admins.""" def start_fixture(self): 
super(OpenPolicyFixture, self).start_fixture() # Get all of the registered rules and set them to '@' to allow any # user to have access. The nova policy "admin_or_owner" concept does # not really apply to most of placement resources since they do not # have a user_id/project_id attribute. rules = {} for rule in policies.list_rules(): name = rule.name # Ignore "base" rules for role:admin. if name in ('admin_api',): continue rules[name] = '@' self.policy_fixture.set_rules(rules) def stop_fixture(self): super(OpenPolicyFixture, self).stop_fixture() class SecureRBACPolicyFixture(APIFixture): """An APIFixture that enforce secure default policies and scope.""" _secure_rbac = True # Even though this just configures the defaults for enforce_scope and # enforce_new_default, it's useful because it's explicit in saying we're # testing old policy behavior. We can remove this once placement removes its # deprecated policies. class LegacyRBACPolicyFixture(APIFixture): """An APIFixture that enforce deprecated policies.""" _secure_rbac = False class NeutronQoSMultiSegmentFixture(APIFixture): """A Gabbi API fixture that creates compute trees simulating Neutron configured with QoS min bw and min packet rate features in multisegment networks. """ # Have 4 trees. 3 trees with the structure of: # # compute # \ VCPU:8, MEMORY_MB:2095, DISK_GB:500 # |\ # | - Open vSwitch agent # | | NET_PACKET_RATE_KILOPACKET_PER_SEC: 1000 # | \ CUSTOM_VNIC_TYPE_NORMAL # | \ # | - br-ex # | NET_BW_EGR_KILOBIT_PER_SEC: 5000 # | NET_BW_IGR_KILOBIT_PER_SEC: 5000 # | CUSTOM_VNIC_TYPE_NORMAL # | CUSTOM_PHYSNET_??? # \ # - NIC Switch agent # | CUSTOM_VNIC_TYPE_DIRECT # | CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL # | CUSTOM_VNIC_TYPE_MACVTAP # \ # \ # - enp129s0f0 # NET_BW_EGR_KILOBIT_PER_SEC: 10000 # NET_BW_IGR_KILOBIT_PER_SEC: 10000 # CUSTOM_VNIC_TYPE_DIRECT # CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL # CUSTOM_VNIC_TYPE_MACVTAP # 'CUSTOM_PHYSNET_???' # # For CUSTOM_PHYSNET_??? 
define the network segment connectivity # compute0: CUSTOM_PHYSNET_OTHER # compute1: CUSTOM_PHYSNET_MSN_S1 # compute2: CUSTOM_PHYSNET_MSN_S2 # # There is a 4th compute that has duplicate network connectivity: # compute3-br-ex is connected to CUSTOM_PHYSNET_MSN_S1 # compute3-br-ex2 is connected to CUSTOM_PHYSNET_MSN_S2 # compute3-enp129s0f0 is connected to CUSTOM_PHYSNET_MSN_S1 # compute3-enp129s0f1 is connected to CUSTOM_PHYSNET_MSN_S2 # but also compute3 has limited bandwidth capacity def start_fixture(self): super(NeutronQoSMultiSegmentFixture, self).start_fixture() # compute 0 with not connectivity to the multi segment network compute0 = tb.create_provider(self.context, 'compute0') os.environ['compute0'] = compute0.uuid tb.add_inventory(compute0, 'VCPU', 8) tb.add_inventory(compute0, 'MEMORY_MB', 4096) tb.add_inventory(compute0, 'DISK_GB', 500) # OVS agent subtree compute0_ovs_agent = tb.create_provider( self.context, 'compute0:Open vSwitch agent', parent=compute0.uuid) os.environ['compute0:ovs_agent'] = compute0_ovs_agent.uuid tb.add_inventory( compute0_ovs_agent, 'NET_PACKET_RATE_KILOPACKET_PER_SEC', 1000) tb.set_traits( compute0_ovs_agent, 'CUSTOM_VNIC_TYPE_NORMAL', ) compute0_br_ex = tb.create_provider( self.context, 'compute0:Open vSwitch agent:br-ex', parent=compute0_ovs_agent.uuid ) os.environ['compute0:br_ex'] = compute0_br_ex.uuid tb.add_inventory( compute0_br_ex, 'NET_BW_EGR_KILOBIT_PER_SEC', 5000) tb.add_inventory( compute0_br_ex, 'NET_BW_IGR_KILOBIT_PER_SEC', 5000) tb.set_traits( compute0_br_ex, 'CUSTOM_VNIC_TYPE_NORMAL', 'CUSTOM_PHYSNET_OTHER', ) # SRIOV agent subtree compute0_sriov_agent = tb.create_provider( self.context, 'compute0:NIC Switch agent', parent=compute0.uuid) os.environ['compute0:sriov_agent'] = compute0_sriov_agent.uuid tb.set_traits( compute0_sriov_agent, 'CUSTOM_VNIC_TYPE_DIRECT', 'CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL', 'CUSTOM_VNIC_TYPE_MACVTAP', ) compute0_pf0 = tb.create_provider( self.context, 'compute0:NIC Switch agent:enp129s0f0', parent=compute0_sriov_agent.uuid ) os.environ['compute0:pf0'] = compute0_pf0.uuid tb.add_inventory( compute0_pf0, 'NET_BW_EGR_KILOBIT_PER_SEC', 10000) tb.add_inventory( compute0_pf0, 'NET_BW_IGR_KILOBIT_PER_SEC', 10000) tb.set_traits( compute0_pf0, 'CUSTOM_VNIC_TYPE_DIRECT', 'CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL', 'CUSTOM_VNIC_TYPE_MACVTAP', 'CUSTOM_PHYSNET_OTHER', ) # compute 1 with network connectivity to segment 1 compute1 = tb.create_provider(self.context, 'compute1') os.environ['compute1'] = compute1.uuid tb.add_inventory(compute1, 'VCPU', 8) tb.add_inventory(compute1, 'MEMORY_MB', 4096) tb.add_inventory(compute1, 'DISK_GB', 500) # OVS agent subtree compute1_ovs_agent = tb.create_provider( self.context, 'compute1:Open vSwitch agent', parent=compute1.uuid) os.environ['compute1:ovs_agent'] = compute1_ovs_agent.uuid tb.add_inventory( compute1_ovs_agent, 'NET_PACKET_RATE_KILOPACKET_PER_SEC', 1000) tb.set_traits( compute1_ovs_agent, 'CUSTOM_VNIC_TYPE_NORMAL', ) compute1_br_ex = tb.create_provider( self.context, 'compute1:Open vSwitch agent:br-ex', parent=compute1_ovs_agent.uuid ) os.environ['compute1:br_ex'] = compute1_br_ex.uuid tb.add_inventory( compute1_br_ex, 'NET_BW_EGR_KILOBIT_PER_SEC', 5000) tb.add_inventory( compute1_br_ex, 'NET_BW_IGR_KILOBIT_PER_SEC', 5000) tb.set_traits( compute1_br_ex, 'CUSTOM_VNIC_TYPE_NORMAL', 'CUSTOM_PHYSNET_MSN_S1', ) # SRIOV agent subtree compute1_sriov_agent = tb.create_provider( self.context, 'compute1:NIC Switch agent', parent=compute1.uuid) os.environ['compute1:sriov_agent'] = 
compute1_sriov_agent.uuid tb.set_traits( compute1_sriov_agent, 'CUSTOM_VNIC_TYPE_DIRECT', 'CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL', 'CUSTOM_VNIC_TYPE_MACVTAP', ) compute1_pf0 = tb.create_provider( self.context, 'compute1:NIC Switch agent:enp129s0f0', parent=compute1_sriov_agent.uuid ) os.environ['compute1:pf0'] = compute1_pf0.uuid tb.add_inventory( compute1_pf0, 'NET_BW_EGR_KILOBIT_PER_SEC', 10000) tb.add_inventory( compute1_pf0, 'NET_BW_IGR_KILOBIT_PER_SEC', 10000) tb.set_traits( compute1_pf0, 'CUSTOM_VNIC_TYPE_DIRECT', 'CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL', 'CUSTOM_VNIC_TYPE_MACVTAP', 'CUSTOM_PHYSNET_MSN_S1', ) # compute 2 with network connectivity to segment 2 compute2 = tb.create_provider(self.context, 'compute2') os.environ['compute2'] = compute2.uuid tb.add_inventory(compute2, 'VCPU', 8) tb.add_inventory(compute2, 'MEMORY_MB', 4096) tb.add_inventory(compute2, 'DISK_GB', 500) # OVS agent subtree compute2_ovs_agent = tb.create_provider( self.context, 'compute2:Open vSwitch agent', parent=compute2.uuid) os.environ['compute2:ovs_agent'] = compute2_ovs_agent.uuid tb.add_inventory( compute2_ovs_agent, 'NET_PACKET_RATE_KILOPACKET_PER_SEC', 1000) tb.set_traits( compute2_ovs_agent, 'CUSTOM_VNIC_TYPE_NORMAL', ) compute2_br_ex = tb.create_provider( self.context, 'compute2:Open vSwitch agent:br-ex', parent=compute2_ovs_agent.uuid ) os.environ['compute2:br_ex'] = compute2_br_ex.uuid tb.add_inventory( compute2_br_ex, 'NET_BW_EGR_KILOBIT_PER_SEC', 5000) tb.add_inventory( compute2_br_ex, 'NET_BW_IGR_KILOBIT_PER_SEC', 5000) tb.set_traits( compute2_br_ex, 'CUSTOM_VNIC_TYPE_NORMAL', 'CUSTOM_PHYSNET_MSN_S2', ) # SRIOV agent subtree compute2_sriov_agent = tb.create_provider( self.context, 'compute2:NIC Switch agent', parent=compute2.uuid) os.environ['compute2:sriov_agent'] = compute2_sriov_agent.uuid tb.set_traits( compute2_sriov_agent, 'CUSTOM_VNIC_TYPE_DIRECT', 'CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL', 'CUSTOM_VNIC_TYPE_MACVTAP', ) compute2_pf0 = tb.create_provider( self.context, 'compute2:NIC Switch agent:enp129s0f0', parent=compute2_sriov_agent.uuid ) os.environ['compute2:pf0'] = compute2_pf0.uuid tb.add_inventory( compute2_pf0, 'NET_BW_EGR_KILOBIT_PER_SEC', 10000) tb.add_inventory( compute2_pf0, 'NET_BW_IGR_KILOBIT_PER_SEC', 10000) tb.set_traits( compute2_pf0, 'CUSTOM_VNIC_TYPE_DIRECT', 'CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL', 'CUSTOM_VNIC_TYPE_MACVTAP', 'CUSTOM_PHYSNET_MSN_S2', ) # compute 3 with network connectivity to both segment 1 and 2 compute3 = tb.create_provider(self.context, 'compute3') os.environ['compute3'] = compute3.uuid tb.add_inventory(compute3, 'VCPU', 8) tb.add_inventory(compute3, 'MEMORY_MB', 4096) tb.add_inventory(compute3, 'DISK_GB', 500) # OVS agent subtree compute3_ovs_agent = tb.create_provider( self.context, 'compute3:Open vSwitch agent', parent=compute3.uuid) os.environ['compute3:ovs_agent'] = compute3_ovs_agent.uuid tb.add_inventory( compute3_ovs_agent, 'NET_PACKET_RATE_KILOPACKET_PER_SEC', 1000) tb.set_traits( compute3_ovs_agent, 'CUSTOM_VNIC_TYPE_NORMAL', ) compute3_br_ex = tb.create_provider( self.context, 'compute3:Open vSwitch agent:br-ex', parent=compute3_ovs_agent.uuid ) os.environ['compute3:br_ex'] = compute3_br_ex.uuid tb.add_inventory( compute3_br_ex, 'NET_BW_EGR_KILOBIT_PER_SEC', 1000) tb.add_inventory( compute3_br_ex, 'NET_BW_IGR_KILOBIT_PER_SEC', 1000) tb.set_traits( compute3_br_ex, 'CUSTOM_VNIC_TYPE_NORMAL', 'CUSTOM_PHYSNET_MSN_S1', ) compute3_br_ex2 = tb.create_provider( self.context, 'compute3:Open vSwitch agent:br-ex2', parent=compute3_ovs_agent.uuid ) 
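# Both compute3 bridges and both of its PFs get only 1000 kbps of bandwidth
# inventory (vs 5000/10000 on the other computes), modeling the "limited
# bandwidth capacity" noted in the class-level comment above.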
os.environ['compute3:br_ex2'] = compute3_br_ex2.uuid tb.add_inventory( compute3_br_ex2, 'NET_BW_EGR_KILOBIT_PER_SEC', 1000) tb.add_inventory( compute3_br_ex2, 'NET_BW_IGR_KILOBIT_PER_SEC', 1000) tb.set_traits( compute3_br_ex2, 'CUSTOM_VNIC_TYPE_NORMAL', 'CUSTOM_PHYSNET_MSN_S2', ) # SRIOV agent subtree compute3_sriov_agent = tb.create_provider( self.context, 'compute3:NIC Switch agent', parent=compute3.uuid) os.environ['compute3:sriov_agent'] = compute2_sriov_agent.uuid tb.set_traits( compute3_sriov_agent, 'CUSTOM_VNIC_TYPE_DIRECT', 'CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL', 'CUSTOM_VNIC_TYPE_MACVTAP', ) compute3_pf0 = tb.create_provider( self.context, 'compute3:NIC Switch agent:enp129s0f0', parent=compute3_sriov_agent.uuid ) os.environ['compute3:pf0'] = compute3_pf0.uuid tb.add_inventory( compute3_pf0, 'NET_BW_EGR_KILOBIT_PER_SEC', 1000) tb.add_inventory( compute3_pf0, 'NET_BW_IGR_KILOBIT_PER_SEC', 1000) tb.set_traits( compute3_pf0, 'CUSTOM_VNIC_TYPE_DIRECT', 'CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL', 'CUSTOM_VNIC_TYPE_MACVTAP', 'CUSTOM_PHYSNET_MSN_S1', ) compute3_pf1 = tb.create_provider( self.context, 'compute3:NIC Switch agent:enp129s0f1', parent=compute3_sriov_agent.uuid ) os.environ['compute3:pf1'] = compute3_pf1.uuid tb.add_inventory( compute3_pf1, 'NET_BW_EGR_KILOBIT_PER_SEC', 1000) tb.add_inventory( compute3_pf1, 'NET_BW_IGR_KILOBIT_PER_SEC', 1000) tb.set_traits( compute3_pf1, 'CUSTOM_VNIC_TYPE_DIRECT', 'CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL', 'CUSTOM_VNIC_TYPE_MACVTAP', 'CUSTOM_PHYSNET_MSN_S2', ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/fixtures/placement.py0000664000175000017500000000776500000000000027323 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures from oslo_config import cfg from oslo_config import fixture as config_fixture from oslo_policy import opts as policy_opts from oslo_utils import uuidutils from wsgi_intercept import interceptor from placement import conf from placement import deploy from placement.tests import fixtures as db_fixture from placement.tests.unit import policy_fixture class PlacementFixture(fixtures.Fixture): """A fixture to placement operations. Runs a local WSGI server bound on a free port and having the Placement application with NoAuth middleware. Optionally, the caller can choose to not use a wsgi-intercept and use this fixture to set up configuration and (optionally) the database. It's possible to ask for a specific token when running the fixtures so all calls would be passing this token. This fixture takes care of starting a fixture for an in-RAM placement database, unless the db kwarg is False. Used by other services, including nova, for functional tests. """ def __init__(self, token='admin', conf_fixture=None, db=True, use_intercept=True, register_opts=True): """Create a Placement Fixture. :param token: The value to be used when passing an auth token header in HTTP requests. 
:param conf_fixture: An oslo_conf.fixture.Config. If provided, config will be based from it. :param db: Whether to start the Database fixture. :param use_intercept: If true, install a wsgi-intercept of the placement WSGI app. :param register_opts: If True, register configuration options. """ self.token = token self.db = db self.use_intercept = use_intercept self.conf_fixture = conf_fixture self.register_opts = register_opts def setUp(self): super(PlacementFixture, self).setUp() if not self.conf_fixture: config = cfg.ConfigOpts() self.conf_fixture = self.useFixture(config_fixture.Config(config)) if self.register_opts: conf.register_opts(self.conf_fixture.conf) if self.db: self.useFixture(db_fixture.Database(self.conf_fixture, set_config=True)) # NOTE(gmann): Set enforce_scope and enforce_new_defaults to the # same value it is for placement service. We need to explicitly set # it here because this fixture is called by Nova functional tests and # Nova default of these config options is changed to True. To avoid # Placement service running with what Nova using in functional tests # we need to set them to False here. policy_opts.set_defaults(self.conf_fixture.conf, enforce_scope=False, enforce_new_defaults=False) self.conf_fixture.config(group='api', auth_strategy='noauth2') self.conf_fixture.conf([], default_config_files=[]) self.useFixture(policy_fixture.PolicyFixture(self.conf_fixture)) if self.use_intercept: loader = deploy.loadapp(self.conf_fixture.conf) def app(): return loader self.endpoint = 'http://%s/placement' % uuidutils.generate_uuid() intercept = interceptor.RequestsInterceptor(app, url=self.endpoint) intercept.install_intercept() self.addCleanup(intercept.uninstall_intercept) ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.276778 openstack_placement-13.0.0/placement/tests/functional/gabbits/0000775000175000017500000000000000000000000024524 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/aggregate-legacy-rbac.yaml0000664000175000017500000000717300000000000031515 0ustar00zuulzuul00000000000000--- # Test the CRUD operations on /resource_providers/{uuid}/aggregates* using a # system administrator context. 
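# The PlacementFixture defined above in
# placement/tests/functional/fixtures/placement.py is what other projects
# (notably nova) use to run functional tests against an in-process
# placement service. A minimal, illustrative sketch of such a consumer
# follows; the test class and its base class are hypothetical and not part
# of this tree:
#
#     from placement.tests.functional.fixtures import placement as placement_fixture
#
#     class MyPlacementConsumerTest(SomeFixtureAwareTestCase):
#         def setUp(self):
#             super().setUp()
#             placement = self.useFixture(placement_fixture.PlacementFixture())
#             # With use_intercept=True (the default), HTTP requests made
#             # against placement.endpoint are answered in-process by the
#             # intercepted WSGI app; no real server or port is involved.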
fixtures: - LegacyRBACPolicyFixture vars: - &project_id $ENVIRON['PROJECT_ID'] - &system_admin_headers x-auth-token: user x-roles: admin,member,reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &system_reader_headers x-auth-token: user x-roles: reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &project_admin_headers x-auth-token: user x-roles: admin,member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_member_headers x-auth-token: user x-roles: member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_reader_headers x-auth-token: user x-roles: reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &agg_1 f918801a-5e54-4bee-9095-09a9d0c786b8 - &agg_2 a893eb5c-e2a0-4251-ab26-f71d3b0cfc0b tests: - name: system admin can create new resource provider POST: /resource_providers request_headers: *system_admin_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 - name: system reader cannot update aggregates PUT: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *system_reader_headers data: resource_provider_generation: 0 aggregates: - *agg_1 - *agg_2 status: 403 - name: project member cannot update aggregates PUT: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *project_member_headers data: resource_provider_generation: 0 aggregates: - *agg_1 - *agg_2 status: 403 - name: project reader cannot update aggregates PUT: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *project_reader_headers data: resource_provider_generation: 0 aggregates: - *agg_1 - *agg_2 status: 403 - name: project admin can update aggregates PUT: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *project_admin_headers data: resource_provider_generation: 0 aggregates: - *agg_1 - *agg_2 status: 200 - name: system admin can update aggregates PUT: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *system_admin_headers data: resource_provider_generation: 1 aggregates: - *agg_1 - *agg_2 status: 200 - name: system admin can list aggregates GET: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *system_admin_headers response_json_paths: $.aggregates.`len`: 2 - name: system reader cannot list aggregates GET: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *system_reader_headers status: 403 - name: project admin can list aggregates GET: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *project_admin_headers response_json_paths: $.aggregates.`len`: 2 - name: project member cannot list aggregates GET: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *project_member_headers status: 403 - name: project reader cannot list aggregates GET: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *project_reader_headers status: 403 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/aggregate-policy.yaml0000664000175000017500000000171500000000000030637 0ustar00zuulzuul00000000000000# This tests the individual CRUD 
operations on # /resource_providers/{uuid}/aggregates* using a non-admin user with an # open policy configuration. The response validation is intentionally minimal. fixtures: - OpenPolicyFixture defaults: request_headers: x-auth-token: user accept: application/json content-type: application/json openstack-api-version: placement latest vars: - &agg_1 f918801a-5e54-4bee-9095-09a9d0c786b8 - &agg_2 a893eb5c-e2a0-4251-ab26-f71d3b0cfc0b tests: - name: post new resource provider POST: /resource_providers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 - name: put some aggregates PUT: /resource_providers/$ENVIRON['RP_UUID']/aggregates data: resource_provider_generation: 0 aggregates: - *agg_1 - *agg_2 status: 200 - name: get those aggregates GET: $LAST_URL response_json_paths: $.aggregates.`len`: 2 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/aggregate-secure-rbac.yaml0000664000175000017500000001245400000000000031535 0ustar00zuulzuul00000000000000--- # Test the CRUD operations on /resource_providers/{uuid}/aggregates* using a # system administrator context. fixtures: - SecureRBACPolicyFixture vars: - &project_id $ENVIRON['PROJECT_ID'] - &admin_project_id $ENVIRON['ADMIN_PROJECT_ID'] - &service_project_id $ENVIRON['SERVICE_PROJECT_ID'] - &admin_headers x-auth-token: user x-roles: admin x-project-id: *admin_project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &service_headers x-auth-token: user x-roles: service x-project-id: *service_project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &system_admin_headers x-auth-token: user x-roles: admin,member,reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &system_reader_headers x-auth-token: user x-roles: reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &project_admin_headers x-auth-token: user x-roles: admin,member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_member_headers x-auth-token: user x-roles: member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_reader_headers x-auth-token: user x-roles: reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &agg_1 f918801a-5e54-4bee-9095-09a9d0c786b8 - &agg_2 a893eb5c-e2a0-4251-ab26-f71d3b0cfc0b tests: - name: admin can create new resource provider POST: /resource_providers request_headers: *admin_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 - name: service can create new resource provider POST: /resource_providers request_headers: *service_headers data: name: $ENVIRON['RP_NAME1'] uuid: $ENVIRON['RP_UUID1'] status: 200 - name: project admin can create new resource provider POST: /resource_providers request_headers: *project_admin_headers data: name: $ENVIRON['RP_NAME2'] uuid: $ENVIRON['RP_UUID2'] status: 200 - name: system reader cannot update aggregates PUT: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *system_reader_headers data: resource_provider_generation: 0 aggregates: - *agg_1
- *agg_2 status: 403 - name: project admin can update aggregates PUT: /resource_providers/$ENVIRON['RP_UUID2']/aggregates request_headers: *project_admin_headers data: resource_provider_generation: 0 aggregates: - *agg_1 - *agg_2 status: 200 - name: admin can update aggregates PUT: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *admin_headers data: resource_provider_generation: 0 aggregates: - *agg_1 - *agg_2 status: 200 - name: service can update aggregates PUT: /resource_providers/$ENVIRON['RP_UUID1']/aggregates request_headers: *service_headers data: resource_provider_generation: 0 aggregates: - *agg_1 - *agg_2 status: 200 - name: project member cannot update aggregates PUT: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *project_member_headers data: resource_provider_generation: 0 aggregates: - *agg_1 - *agg_2 status: 403 - name: project reader cannot update aggregates PUT: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *project_reader_headers data: resource_provider_generation: 0 aggregates: - *agg_1 - *agg_2 status: 403 - name: system admin cannot update aggregates PUT: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *system_admin_headers data: resource_provider_generation: 0 aggregates: - *agg_1 - *agg_2 status: 403 - name: system admin cannot list aggregates GET: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *system_admin_headers status: 403 - name: system reader cannot list aggregates GET: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *system_reader_headers status: 403 - name: admin can list aggregates GET: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *admin_headers response_json_paths: $.aggregates.`len`: 2 - name: service can list aggregates GET: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *service_headers response_json_paths: $.aggregates.`len`: 2 - name: project admin can list aggregates GET: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *project_admin_headers response_json_paths: $.aggregates.`len`: 2 - name: project member cannot list aggregates GET: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *project_member_headers status: 403 - name: project reader cannot list aggregates GET: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: *project_reader_headers status: 403 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/aggregate.yaml0000664000175000017500000001166700000000000027351 0ustar00zuulzuul00000000000000 fixtures: - APIFixture defaults: request_headers: x-auth-token: admin accept: application/json content-type: application/json openstack-api-version: placement latest vars: - &agg_1 f918801a-5e54-4bee-9095-09a9d0c786b8 - &agg_2 a893eb5c-e2a0-4251-ab26-f71d3b0cfc0b tests: - name: get aggregates for bad resource provider GET: /resource_providers/6984bb2d-830d-4c8d-ac64-c5a8103664be/aggregates status: 404 response_json_paths: $.errors[0].title: Not Found - name: put aggregates for bad resource provider PUT: /resource_providers/6984bb2d-830d-4c8d-ac64-c5a8103664be/aggregates data: [] status: 404 response_json_paths: $.errors[0].title: Not Found - name: post new resource provider POST: /resource_providers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 response_headers: location: 
//resource_providers/[a-f0-9-]+/ - name: get empty aggregates GET: /resource_providers/$ENVIRON['RP_UUID']/aggregates response_json_paths: $.aggregates: [] - name: aggregates 404 for out of date microversion get GET: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: openstack-api-version: placement 1.0 status: 404 response_json_paths: $.errors[0].title: Not Found - name: aggregates 404 for out of date microversion put PUT: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: openstack-api-version: placement 1.0 status: 404 response_json_paths: $.errors[0].title: Not Found - name: put some aggregates - old payload and new microversion PUT: $LAST_URL data: - *agg_1 - *agg_2 status: 400 response_strings: - JSON does not validate response_json_paths: $.errors[0].title: Bad Request - name: put some aggregates - new payload and old microversion PUT: $LAST_URL request_headers: openstack-api-version: placement 1.18 data: resource_provider_generation: 0 aggregates: - *agg_1 - *agg_2 status: 400 response_strings: - JSON does not validate response_json_paths: $.errors[0].title: Bad Request - name: put some aggregates - new payload and new microversion PUT: $LAST_URL data: resource_provider_generation: 0 aggregates: - *agg_1 - *agg_2 status: 200 response_headers: content-type: /application/json/ cache-control: no-cache # Does last-modified look like a legit timestamp? last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ response_json_paths: $.aggregates[0]: *agg_1 $.aggregates[1]: *agg_2 $.resource_provider_generation: 1 - name: get those aggregates GET: $LAST_URL response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ response_json_paths: $.aggregates.`len`: 2 - name: clear those aggregates - generation conflict PUT: $LAST_URL data: resource_provider_generation: 0 aggregates: [] status: 409 response_json_paths: $.errors[0].code: placement.concurrent_update - name: clear those aggregates PUT: $LAST_URL data: resource_provider_generation: 1 aggregates: [] status: 200 response_json_paths: $.aggregates: [] - name: get empty aggregates again GET: /resource_providers/$ENVIRON['RP_UUID']/aggregates response_json_paths: $.aggregates: [] - name: put non json PUT: $LAST_URL data: '{"bad", "not json"}' status: 400 response_strings: - Malformed JSON response_json_paths: $.errors[0].title: Bad Request - name: put invalid json no generation PUT: $LAST_URL data: aggregates: - *agg_1 - *agg_2 status: 400 response_strings: - JSON does not validate response_json_paths: $.errors[0].title: Bad Request - name: put invalid json not uuids PUT: $LAST_URL data: aggregates: - harry - sally resource_provider_generation: 2 status: 400 response_strings: - "is not a 'uuid'" response_json_paths: $.errors[0].title: Bad Request - name: put same aggregates twice PUT: $LAST_URL data: aggregates: - *agg_1 - *agg_1 resource_provider_generation: 2 status: 400 response_strings: - has non-unique elements response_json_paths: $.errors[0].title: Bad Request # The next two tests confirm that prior to version 1.15 we do # not set the cache-control or last-modified headers on either # PUT or GET. 
- name: put some aggregates v1.14 PUT: $LAST_URL request_headers: openstack-api-version: placement 1.14 data: - *agg_1 - *agg_2 response_forbidden_headers: - last-modified - cache-control - name: get those aggregates v1.14 GET: $LAST_URL request_headers: openstack-api-version: placement 1.14 response_forbidden_headers: - last-modified - cache-control ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocation-bad-class.yaml0000664000175000017500000000413700000000000031371 0ustar00zuulzuul00000000000000 fixtures: - APIFixture defaults: request_headers: x-auth-token: admin accept: application/json content-type: application/json # Using <= 1.11 allows the PUT /allocations/{uuid} below # to work with the older request form. openstack-api-version: placement 1.11 tests: - name: create a resource provider POST: /resource_providers data: name: an rp status: 201 - name: get resource provider GET: $LOCATION status: 200 - name: create a resource class PUT: /resource_classes/CUSTOM_GOLD status: 201 - name: add inventory to an rp PUT: /resource_providers/$HISTORY['get resource provider'].$RESPONSE['$.uuid']/inventories data: resource_provider_generation: 0 inventories: VCPU: total: 24 CUSTOM_GOLD: total: 5 status: 200 - name: allocate some of it two desc: this is the one that used to raise a 500 PUT: /allocations/6d9f83db-6eb5-49f6-84b0-5d03c6aa9fc8 data: allocations: - resource_provider: uuid: $HISTORY['get resource provider'].$RESPONSE['$.uuid'] resources: DISK_GB: 5 CUSTOM_GOLD: 1 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 409 - name: allocate some of it custom PUT: /allocations/6d9f83db-6eb5-49f6-84b0-5d03c6aa9fc8 data: allocations: - resource_provider: uuid: $HISTORY['get resource provider'].$RESPONSE['$.uuid'] resources: CUSTOM_GOLD: 1 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 204 - name: allocate some of it standard PUT: /allocations/6d9f83db-6eb5-49f6-84b0-5d03c6aa9fc8 data: allocations: - resource_provider: uuid: $HISTORY['get resource provider'].$RESPONSE['$.uuid'] resources: DISK_GB: 1 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 409 ././@PaxHeader0000000000000000000000000000021200000000000011450 xustar0000000000000000116 path=openstack_placement-13.0.0/placement/tests/functional/gabbits/allocation-candidates-any-traits-groups.yaml 22 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocation-candidates-any-traits-group0000664000175000017500000002631500000000000034123 0ustar00zuulzuul00000000000000fixtures: - NeutronQoSMultiSegmentFixture defaults: request_headers: x-auth-token: admin accept: application/json openstack-api-version: placement latest tests: - name: a VM with single port on a non multisegment network # only compute0 has access to the non-multi-segment network GET: >- /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:10 &resources-port-normal-pps=NET_PACKET_RATE_KILOPACKET_PER_SEC:1000 &required-port-normal-pps=CUSTOM_VNIC_TYPE_NORMAL &resources-port-normal-bw=NET_BW_EGR_KILOBIT_PER_SEC:1000,NET_BW_IGR_KILOBIT_PER_SEC:1000 &required-port-normal-bw=CUSTOM_VNIC_TYPE_NORMAL,CUSTOM_PHYSNET_OTHER &same_subtree=-port-normal-pps,-port-normal-bw &group_policy=none status: 200 response_json_paths: $.allocation_requests.`len`: 1 
$.allocation_requests..allocations["$ENVIRON['compute0']"].resources[VCPU]: 1 $.allocation_requests..allocations["$ENVIRON['compute0']"].resources[MEMORY_MB]: 1024 $.allocation_requests..allocations["$ENVIRON['compute0']"].resources[DISK_GB]: 10 $.allocation_requests..allocations["$ENVIRON['compute0:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: 1000 $.allocation_requests..allocations["$ENVIRON['compute0:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: 1000 $.allocation_requests..allocations["$ENVIRON['compute0:br_ex']"].resources[NET_BW_EGR_KILOBIT_PER_SEC]: 1000 - name: a VM with single port on the multi-segment network # compute1 compute2 has both access to one segment while compute3 has access # to two segments so compute1,2 will have one candidate while compute 3 will # have two GET: >- /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:10 &resources-port-msn-pps=NET_PACKET_RATE_KILOPACKET_PER_SEC:1000 &required-port-msn-pps=CUSTOM_VNIC_TYPE_NORMAL &resources-port-msn-bw=NET_BW_EGR_KILOBIT_PER_SEC:1000,NET_BW_IGR_KILOBIT_PER_SEC:1000 &required-port-msn-bw=CUSTOM_VNIC_TYPE_NORMAL &required-port-msn-bw=in:CUSTOM_PHYSNET_MSN_S1,CUSTOM_PHYSNET_MSN_S2 &same_subtree=-port-msn-pps,-port-msn-bw &group_policy=none status: 200 response_json_paths: $.allocation_requests.`len`: 4 $.allocation_requests..allocations["$ENVIRON['compute1']"].resources[VCPU]: 1 $.allocation_requests..allocations["$ENVIRON['compute1']"].resources[MEMORY_MB]: 1024 $.allocation_requests..allocations["$ENVIRON['compute1']"].resources[DISK_GB]: 10 $.allocation_requests..allocations["$ENVIRON['compute1:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: 1000 $.allocation_requests..allocations["$ENVIRON['compute1:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: 1000 $.allocation_requests..allocations["$ENVIRON['compute1:br_ex']"].resources[NET_BW_EGR_KILOBIT_PER_SEC]: 1000 $.allocation_requests..allocations["$ENVIRON['compute2']"].resources[VCPU]: 1 $.allocation_requests..allocations["$ENVIRON['compute2']"].resources[MEMORY_MB]: 1024 $.allocation_requests..allocations["$ENVIRON['compute2']"].resources[DISK_GB]: 10 $.allocation_requests..allocations["$ENVIRON['compute2:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: 1000 $.allocation_requests..allocations["$ENVIRON['compute2:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: 1000 $.allocation_requests..allocations["$ENVIRON['compute2:br_ex']"].resources[NET_BW_EGR_KILOBIT_PER_SEC]: 1000 $.allocation_requests..allocations["$ENVIRON['compute3']"].resources[VCPU]: [1, 1] $.allocation_requests..allocations["$ENVIRON['compute3']"].resources[MEMORY_MB]: [1024, 1024] $.allocation_requests..allocations["$ENVIRON['compute3']"].resources[DISK_GB]: [10, 10] $.allocation_requests..allocations["$ENVIRON['compute3:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: [1000, 1000] $.allocation_requests..allocations["$ENVIRON['compute3:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: 1000 $.allocation_requests..allocations["$ENVIRON['compute3:br_ex']"].resources[NET_BW_EGR_KILOBIT_PER_SEC]: 1000 $.allocation_requests..allocations["$ENVIRON['compute3:br_ex2']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: 1000 $.allocation_requests..allocations["$ENVIRON['compute3:br_ex2']"].resources[NET_BW_EGR_KILOBIT_PER_SEC]: 1000 - name: a VM with two ports on the multi-segment network limited bandwidth # similarly to the single port test compute 1 and compute 2 can offer one # allocation candidate as both port fits to the one segment of each compute. 
# However, compute3 only has enough bandwidth capacity for one port per # connected network segment. So either we allocate port1-segment1 and # port2-segment2 OR port1-segment2 and port2-segment1 GET: >- /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:10 &resources-port1-msn-pps=NET_PACKET_RATE_KILOPACKET_PER_SEC:100 &required-port1-msn-pps=CUSTOM_VNIC_TYPE_NORMAL &resources-port1-msn-bw=NET_BW_EGR_KILOBIT_PER_SEC:1000,NET_BW_IGR_KILOBIT_PER_SEC:1000 &required-port1-msn-bw=CUSTOM_VNIC_TYPE_NORMAL &required-port1-msn-bw=in:CUSTOM_PHYSNET_MSN_S1,CUSTOM_PHYSNET_MSN_S2 &same_subtree=-port1-msn-pps,-port1-msn-bw &resources-port2-msn-pps=NET_PACKET_RATE_KILOPACKET_PER_SEC:100 &required-port2-msn-pps=CUSTOM_VNIC_TYPE_NORMAL &resources-port2-msn-bw=NET_BW_EGR_KILOBIT_PER_SEC:1000,NET_BW_IGR_KILOBIT_PER_SEC:1000 &required-port2-msn-bw=CUSTOM_VNIC_TYPE_NORMAL &required-port2-msn-bw=in:CUSTOM_PHYSNET_MSN_S1,CUSTOM_PHYSNET_MSN_S2 &same_subtree=-port2-msn-pps,-port2-msn-bw &group_policy=none status: 200 response_json_paths: $.allocation_requests.`len`: 4 $.allocation_requests..allocations["$ENVIRON['compute1']"].resources[VCPU]: 1 $.allocation_requests..allocations["$ENVIRON['compute1']"].resources[MEMORY_MB]: 1024 $.allocation_requests..allocations["$ENVIRON['compute1']"].resources[DISK_GB]: 10 $.allocation_requests..allocations["$ENVIRON['compute1:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: 200 $.allocation_requests..allocations["$ENVIRON['compute1:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: 2000 $.allocation_requests..allocations["$ENVIRON['compute2']"].resources[VCPU]: 1 $.allocation_requests..allocations["$ENVIRON['compute2']"].resources[MEMORY_MB]: 1024 $.allocation_requests..allocations["$ENVIRON['compute2']"].resources[DISK_GB]: 10 $.allocation_requests..allocations["$ENVIRON['compute2:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: 200 $.allocation_requests..allocations["$ENVIRON['compute2:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: 2000 $.allocation_requests..allocations["$ENVIRON['compute3']"].resources[VCPU]: [1, 1] $.allocation_requests..allocations["$ENVIRON['compute3']"].resources[MEMORY_MB]: [1024, 1024] $.allocation_requests..allocations["$ENVIRON['compute3']"].resources[DISK_GB]: [10, 10] $.allocation_requests..allocations["$ENVIRON['compute3:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: [200, 200] $.allocation_requests..allocations["$ENVIRON['compute3:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: [1000, 1000] $.allocation_requests..allocations["$ENVIRON['compute3:br_ex2']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: [1000, 1000] - name: a VM with two ports on the multi-segment network # similar test as the previous but the bandwidth request is decreased so # that compute3 now can fit both ports into one segment. 
This means compute3 # now has 4 candidates GET: >- /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:10 &resources-port1-msn-pps=NET_PACKET_RATE_KILOPACKET_PER_SEC:100 &required-port1-msn-pps=CUSTOM_VNIC_TYPE_NORMAL &resources-port1-msn-bw=NET_BW_EGR_KILOBIT_PER_SEC:100,NET_BW_IGR_KILOBIT_PER_SEC:100 &required-port1-msn-bw=CUSTOM_VNIC_TYPE_NORMAL &required-port1-msn-bw=in:CUSTOM_PHYSNET_MSN_S1,CUSTOM_PHYSNET_MSN_S2 &same_subtree=-port1-msn-pps,-port1-msn-bw &resources-port2-msn-pps=NET_PACKET_RATE_KILOPACKET_PER_SEC:100 &required-port2-msn-pps=CUSTOM_VNIC_TYPE_NORMAL &resources-port2-msn-bw=NET_BW_EGR_KILOBIT_PER_SEC:100,NET_BW_IGR_KILOBIT_PER_SEC:100 &required-port2-msn-bw=CUSTOM_VNIC_TYPE_NORMAL &required-port2-msn-bw=in:CUSTOM_PHYSNET_MSN_S1,CUSTOM_PHYSNET_MSN_S2 &same_subtree=-port2-msn-pps,-port2-msn-bw &group_policy=none status: 200 response_json_paths: $.allocation_requests.`len`: 6 $.allocation_requests..allocations["$ENVIRON['compute1']"].resources[VCPU]: 1 $.allocation_requests..allocations["$ENVIRON['compute1']"].resources[MEMORY_MB]: 1024 $.allocation_requests..allocations["$ENVIRON['compute1']"].resources[DISK_GB]: 10 $.allocation_requests..allocations["$ENVIRON['compute1:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: 200 $.allocation_requests..allocations["$ENVIRON['compute1:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: 200 $.allocation_requests..allocations["$ENVIRON['compute1:br_ex']"].resources[NET_BW_EGR_KILOBIT_PER_SEC]: 200 $.allocation_requests..allocations["$ENVIRON['compute2']"].resources[VCPU]: 1 $.allocation_requests..allocations["$ENVIRON['compute2']"].resources[MEMORY_MB]: 1024 $.allocation_requests..allocations["$ENVIRON['compute2']"].resources[DISK_GB]: 10 $.allocation_requests..allocations["$ENVIRON['compute2:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: 200 $.allocation_requests..allocations["$ENVIRON['compute2:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: 200 $.allocation_requests..allocations["$ENVIRON['compute2:br_ex']"].resources[NET_BW_EGR_KILOBIT_PER_SEC]: 200 $.allocation_requests..allocations["$ENVIRON['compute3']"].resources[VCPU]: [1, 1, 1, 1] $.allocation_requests..allocations["$ENVIRON['compute3']"].resources[MEMORY_MB]: [1024, 1024, 1024, 1024] $.allocation_requests..allocations["$ENVIRON['compute3']"].resources[DISK_GB]: [10, 10, 10, 10] $.allocation_requests..allocations["$ENVIRON['compute3:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: [200, 200, 200, 200] # So the 4 candidate from compute3 are # * both ports allocate from br_ex so br_ex has a consumption of 100 + 100, # then br_ex2 is not in the candidate (this is why the br_ex2 lists have only 3 items) # * both ports allocate from br_ex2 then br_ex is not in the candidate (this is why the br_ex lists have only 3 items) # * port1 allocates 100 from br_ex, port2 allocates 100 from br_ex2 # * port2 allocates 100 from br_ex, port1 allocates 100 from br_ex2 # As the candidates are in random order the right-hand side needs to list all possible permutations $.allocation_requests..allocations["$ENVIRON['compute3:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: /[100, 100, 200]|[100, 200, 100]|[200, 100, 100]/ $.allocation_requests..allocations["$ENVIRON['compute3:br_ex']"].resources[NET_BW_EGR_KILOBIT_PER_SEC]: /[100, 100, 200]|[100, 200, 100]|[200, 100, 100]/ $.allocation_requests..allocations["$ENVIRON['compute3:br_ex2']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: /[100, 100, 200]|[100, 200, 100]|[200, 100, 100]/ 
$.allocation_requests..allocations["$ENVIRON['compute3:br_ex2']"].resources[NET_BW_EGR_KILOBIT_PER_SEC]: /[100, 100, 200]|[100, 200, 100]|[200, 100, 100]/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocation-candidates-any-traits.yaml0000664000175000017500000000705000000000000033725 0ustar00zuulzuul00000000000000fixtures: - GranularFixture defaults: request_headers: x-auth-token: admin accept: application/json openstack-api-version: placement latest tests: - name: the 'in:' trait query is not supported yet GET: /allocation_candidates?required=in:CUSTOM_FOO,HW_CPU_X86_MMX&resources=VCPU:1 request_headers: openstack-api-version: placement 1.38 status: 400 response_strings: - "The format 'in:HW_CPU_X86_VMX,CUSTOM_MAGIC' only supported since microversion 1.39." - name: the 'in:' trait query is not supported yet in named request group GET: /allocation_candidates?requiredX=in:CUSTOM_FOO,HW_CPU_X86_MMX&resourcesX=VCPU:1 request_headers: openstack-api-version: placement 1.38 status: 400 response_strings: - "The format 'in:HW_CPU_X86_VMX,CUSTOM_MAGIC' only supported since microversion 1.39." - name: the second required field overwrites the first # The fixture has one RP for each trait but no RP for both traits. # As the second 'required' key overwrites the first in <= 1.38 we expect # that one of that RPs will be returned. GET: /allocation_candidates?required=CUSTOM_FOO&required=HW_CPU_X86_MMX&resources=VCPU:1 request_headers: openstack-api-version: placement 1.38 status: 200 response_json_paths: $.allocation_requests.`len`: 1 - name: the second required field overwrites the first in named groups # The fixture has one RP for each trait but no RP for both traits. # As the second 'required' key overwrites the first in <= 1.38 we expect # that one of that RPs will be returned. GET: /allocation_candidates?requiredX=CUSTOM_FOO&requiredX=HW_CPU_X86_MMX&resourcesX=VCPU:1 request_headers: openstack-api-version: placement 1.38 status: 200 response_json_paths: $.allocation_requests.`len`: 1 - name: get candidates with both OR, AND, and NOT trait queries # DXVA or TLS would allow all the trees, AVX filters that down to the left # and the middle but FOO forbids left so middle remains. Middle has access # to two shared disk provider so the query returns two candidates GET: /allocation_candidates?required=in:HW_GPU_API_DXVA,HW_NIC_ACCEL_TLS&required=HW_CPU_X86_AVX,!CUSTOM_FOO&resources=VCPU:1,DISK_GB:1 status: 200 response_json_paths: $.allocation_requests.`len`: 2 $.allocation_requests..allocations["$ENVIRON['CN_MIDDLE']"].resources[VCPU]: [1, 1] $.allocation_requests..allocations["$ENVIRON['SHR_DISK_1']"].resources[DISK_GB]: 1 $.allocation_requests..allocations["$ENVIRON['SHR_DISK_2']"].resources[DISK_GB]: 1 - name: get candidates with multiple OR queries # The left tree has neither MMX nor TLS, so it is out. The middle tree has # TLS and SSD via shr_disk_1 so that is match. 
The right tree has MMX and SSD # on the root so that is a match, but it can also get DISK from shr_disk_2 # even if it is not SSD (the SSD trait and the DISK_GB resource are not tight # together in any way in placement) GET: /allocation_candidates?required=in:HW_CPU_X86_MMX,HW_NIC_ACCEL_TLS&required=in:CUSTOM_DISK_SSD,CUSTOM_FOO&resources=VCPU:1,DISK_GB:1 status: 200 response_json_paths: $.allocation_requests.`len`: 3 $.allocation_requests..allocations["$ENVIRON['CN_MIDDLE']"].resources[VCPU]: 1 $.allocation_requests..allocations["$ENVIRON['SHR_DISK_1']"].resources[DISK_GB]: 1 $.allocation_requests..allocations["$ENVIRON['CN_RIGHT']"].resources[VCPU]: [1, 1] $.allocation_requests..allocations["$ENVIRON['CN_RIGHT']"].resources[DISK_GB]: 1 $.allocation_requests..allocations["$ENVIRON['SHR_DISK_2']"].resources[DISK_GB]: 1 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocation-candidates-bug-1792503.yaml0000664000175000017500000002364400000000000033246 0ustar00zuulzuul00000000000000# Tests of allocation candidates API fixtures: - NUMAAggregateFixture defaults: request_headers: x-auth-token: admin accept: application/json openstack-api-version: placement 1.32 tests: - name: get allocation candidates without aggregate GET: /allocation_candidates?resources=VCPU:1 response_json_paths: $.allocation_requests.`len`: 4 $.allocation_requests..allocations["$ENVIRON['NUMA1_1_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA1_2_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA2_1_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA2_2_UUID']"].resources.VCPU: 1 - name: get allocation candidates with aggregate A GET: /allocation_candidates?resources=VCPU:1&member_of=$ENVIRON['AGGA_UUID'] response_json_paths: # Aggregate A is on the root rps (both cn1 and cn2) so it spans on the # whole tree. We have full allocations here. $.allocation_requests.`len`: 4 $.allocation_requests..allocations["$ENVIRON['NUMA1_1_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA1_2_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA2_1_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA2_2_UUID']"].resources.VCPU: 1 - name: get allocation candidates with aggregate A granular GET: /allocation_candidates?resources1=VCPU:1&member_of1=$ENVIRON['AGGA_UUID'] response_json_paths: # Aggregate A is on the root rps (both cn1 and cn2) so it spans on the # whole tree, but only for the unsuffixed request group. $.allocation_requests.`len`: 0 - name: get allocation candidates with aggregate B GET: /allocation_candidates?resources=VCPU:1&member_of=$ENVIRON['AGGB_UUID'] response_json_paths: # Aggregate B is on the root of cn2 so it spans on the # whole tree including rps of NUMA2_1 and NUMA2_2. $.allocation_requests.`len`: 2 $.allocation_requests..allocations["$ENVIRON['NUMA2_1_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA2_2_UUID']"].resources.VCPU: 1 - name: get allocation candidates with aggregate C GET: /allocation_candidates?resources=VCPU:1&member_of=$ENVIRON['AGGC_UUID'] response_json_paths: # Aggregate C is *NOT* on the root, so we should get only NUMA1_1 # here that is only the rp in aggregate C. 
$.allocation_requests.`len`: 1 $.allocation_requests..allocations["$ENVIRON['NUMA1_1_UUID']"].resources.VCPU: 1 - name: get allocation candidates with aggregate C granular GET: /allocation_candidates?resources1=VCPU:1&member_of1=$ENVIRON['AGGC_UUID'] response_json_paths: # Aggregate C is only on NUMA1_1. $.allocation_requests.`len`: 1 $.allocation_requests..allocations["$ENVIRON['NUMA1_1_UUID']"].resources.VCPU: 1 - name: get allocation candidates with shared storage GET: /allocation_candidates?resources=VCPU:1,DISK_GB:1000 response_json_paths: # Since `members_of` query parameter is not specified, sharing rp 1 is # being shared with the *whole* trees of CN1 and CN2. Sharing rp 2 is # being shared with the *whole* tree of CN1. # As a result, there should be 6 allocation candidates: # [ # (numa1-1, ss1), (numa1-2, ss1), (numa2-1, ss1), (numa2-2, ss1), # (numa1-1, ss2), # ] $.allocation_requests.`len`: 6 $.allocation_requests..allocations["$ENVIRON['NUMA1_1_UUID']"].resources.VCPU: [1, 1] $.allocation_requests..allocations["$ENVIRON['NUMA1_2_UUID']"].resources.VCPU: [1, 1] $.allocation_requests..allocations["$ENVIRON['NUMA2_1_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA2_2_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: [1000, 1000, 1000, 1000] $.allocation_requests..allocations["$ENVIRON['SS2_UUID']"].resources.DISK_GB: [1000, 1000] - name: get allocation candidates with shared storage with aggregate A GET: /allocation_candidates?resources=VCPU:1,DISK_GB:1000&member_of=$ENVIRON['AGGA_UUID'] response_json_paths: $.allocation_requests.`len`: 4 # Since aggregate A is specified, which is on the root CN1, sharing # rp 1 can be allocation candidates with the *whole* trees in CN1. # Sharing rp 2 can't in the allocation candidates since it is not # under aggregate A but under aggregate C. # As a result, there should be 4 allocation candidates: # [ # (numa1-1, ss1), (numa1-2, ss1), (numa2-1, ss1), (numa2-2, ss1) # ] $.allocation_requests..allocations["$ENVIRON['NUMA1_1_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA1_2_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA2_1_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA2_2_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: [1000, 1000, 1000, 1000] - name: get allocation candidates with shared storage with aggregate B GET: /allocation_candidates?resources=VCPU:1,DISK_GB:1000&member_of=$ENVIRON['AGGB_UUID'] response_json_paths: # We don't have shared disk in aggregate B. $.allocation_requests.`len`: 0 - name: get allocation candidates with shared storage with aggregate C GET: /allocation_candidates?resources=VCPU:1,DISK_GB:1000&member_of=$ENVIRON['AGGC_UUID'] response_json_paths: # Since aggregate C is specified, which is on *non-root*, NUMA1_1, # sharing provider 2 is not shared with the whole tree. It is shared # with rps only with aggregate C for their own (opposite to not on root). # As a result, there should be 1 allocation candidate: # [ # (numa1-1, ss2), # ] $.allocation_requests.`len`: 1 $.allocation_requests..allocations["$ENVIRON['NUMA1_1_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['SS2_UUID']"].resources.DISK_GB: 1000 # Tests for negative aggregate membership from microversion 1.32. 
# The negative aggregate feature had not yet been implemented when bug1792503 # was reported, but we include the tests here to make sure that it is # consistent with the positive aggregate strategy with nested providers above. - name: get allocation candidates with shared storage without aggregate A GET: /allocation_candidates?resources=VCPU:1,DISK_GB:1000&member_of=!$ENVIRON['AGGA_UUID'] response_json_paths: # Aggregate A is on the root rps (both cn1 and cn2) so it spans on the # whole tree. We have no allocation requests here. $.allocation_requests.`len`: 0 - name: get allocation candidates with shared storage without aggregate B GET: /allocation_candidates?resources=VCPU:1,DISK_GB:1000&member_of=!$ENVIRON['AGGB_UUID'] response_json_paths: # Aggregate B is on the root of cn2 and it spans on the whole tree # including rps of NUMA2_1 and NUMA2_2 so we exclude them. # As a result, there should be 4 allocation candidates: # [ # (numa1-1, ss1), (numa1-2, ss1), # (numa1-1, ss2), (numa1-2, ss2), # ] $.allocation_requests.`len`: 4 $.allocation_requests..allocations["$ENVIRON['NUMA1_1_UUID']"].resources.VCPU: [1, 1] $.allocation_requests..allocations["$ENVIRON['NUMA1_2_UUID']"].resources.VCPU: [1, 1] $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: [1000, 1000] $.allocation_requests..allocations["$ENVIRON['SS2_UUID']"].resources.DISK_GB: [1000, 1000] - name: get allocation candidates with shared storage without aggregate C GET: /allocation_candidates?resources=VCPU:1,DISK_GB:1000&member_of=!$ENVIRON['AGGC_UUID'] response_json_paths: # Aggregate C is *NOT* on the root. We should exclude NUMA1_1 and SS2, # but we should get NUMA1_2 # [ # (numa1-2, ss1), (numa2-1, ss1), (numa2-2, ss1) # ] $.allocation_requests.`len`: 3 $.allocation_requests..allocations["$ENVIRON['NUMA1_2_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA2_1_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA2_2_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: [1000, 1000, 1000] - name: get allocation candidates with shared storage in (aggA or aggB) and (not aggC) GET: /allocation_candidates?resources=VCPU:1,DISK_GB:1000&member_of=in:$ENVIRON['AGGA_UUID'],$ENVIRON['AGGB_UUID']&member_of=!$ENVIRON['AGGC_UUID'] response_json_paths: # Aggregate C is *NOT* on the root. We should exclude NUMA1_1 and SS2, # but we should get NUMA1_2 # [ # (numa1-2, ss1), (numa2-1, ss1), (numa2-2, ss1) # ] $.allocation_requests.`len`: 3 $.allocation_requests..allocations["$ENVIRON['NUMA1_2_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA2_1_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA2_2_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: [1000, 1000, 1000] - name: get allocation candidates with shared storage neither in aggB nor in aggC but in aggA GET: /allocation_candidates?resources=VCPU:1,DISK_GB:1000&member_of=$ENVIRON['AGGA_UUID']&member_of=!in:$ENVIRON['AGGB_UUID'],$ENVIRON['AGGC_UUID'] response_json_paths: # Aggregate B is on the root. We should exclude all the rps on CN2 # Aggregate C is *NOT* on the root. 
We should exclude NUMA1_1 and SS2, # but we should get NUMA1_1 # [ # (numa1-1, ss1) # ] $.allocation_requests.`len`: 1 $.allocation_requests..allocations["$ENVIRON['NUMA1_2_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: 1000 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocation-candidates-legacy-rbac.yaml0000664000175000017500000000406000000000000034001 0ustar00zuulzuul00000000000000--- fixtures: - LegacyRBACPolicyFixture vars: - &project_id $ENVIRON['PROJECT_ID'] - &system_admin_headers x-auth-token: user x-roles: admin,member,reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &system_reader_headers x-auth-token: user x-roles: reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &project_admin_headers x-auth-token: user x-roles: admin,member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_member_headers x-auth-token: user x-roles: member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_reader_headers x-auth-token: user x-roles: reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest tests: - name: system admin can get allocation candidates GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100 request_headers: *system_admin_headers status: 200 - name: system reader cannot get allocation candidates GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100 request_headers: *system_reader_headers status: 403 - name: project admin can get allocation candidates GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100 request_headers: *project_admin_headers status: 200 - name: project member cannot get allocation candidates GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100 request_headers: *project_member_headers status: 403 - name: project reader cannot allocation candidates GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100 request_headers: *project_reader_headers status: 403 ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=openstack_placement-13.0.0/placement/tests/functional/gabbits/allocation-candidates-mappings-numa.yaml 22 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocation-candidates-mappings-numa.ya0000664000175000017500000001270100000000000034054 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # Tests for allocation request mappings when using nested providers. 
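# A brief orientation for the tests below: starting with microversion 1.34
# each entry in allocation_requests carries a "mappings" object next to
# "allocations". It maps every request group suffix ('' for the unsuffixed
# group, '_NET1' and similar for suffixed groups) to the list of resource
# provider uuids that satisfied that group. Illustrative shape only, with
# placeholder uuids:
#
#   {
#     "allocations": {"<numa0 uuid>": {"resources": {"VCPU": 1}}},
#     "mappings": {"": ["<numa0 uuid>"], "_NET1": ["<esn1 uuid>"]}
#   }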
fixtures: # See the layout diagram in this fixture's docstring in ../fixtures.py - NUMANetworkFixture defaults: request_headers: x-auth-token: admin content-type: application/json accept: application/json # 1.34 is the microversion at which mappings are expected openstack-api-version: placement 1.34 tests: - name: simple mapping non granular GET: /allocation_candidates query_parameters: resources: VCPU:1 response_json_paths: $.allocation_requests.`len`: 3 $.provider_summaries.`len`: 23 # keys are allocations, mappings $.allocation_requests[0].`len`: 2 $.allocation_requests[0].mappings[''].`len`: 1 $.allocation_requests[0].mappings[''][0]: /$ENVIRON['CN2_UUID']|$ENVIRON['NUMA0_UUID']|$ENVIRON['NUMA1_UUID']/ - name: no mappings in 1.33 GET: /allocation_candidates query_parameters: resources: VCPU:1 request_headers: openstack-api-version: placement 1.33 response_json_paths: $.allocation_requests.`len`: 3 $.provider_summaries.`len`: 23 # keys are solely 'allocations' $.allocation_requests[0].`len`: 1 - name: simple isolated mapping GET: /allocation_candidates query_parameters: resources_LEFT: VCPU:1 resources_RIGHT: VCPU:1 group_policy: isolate response_json_paths: $.allocation_requests.`len`: 2 $.provider_summaries.`len`: 12 $.allocation_requests[0].mappings.`len`: 2 $.allocation_requests[0].mappings['_LEFT'][0]: /$ENVIRON['NUMA0_UUID']|$ENVIRON['NUMA1_UUID']/ $.allocation_requests[0].mappings['_RIGHT'][0]: /$ENVIRON['NUMA1_UUID']|$ENVIRON['NUMA0_UUID']/ - name: granular plus not granular GET: /allocation_candidates query_parameters: required_NET1: CUSTOM_PHYSNET1 resources_NET1: NET_BW_EGR_KILOBIT_PER_SEC:10 required_NET2: CUSTOM_PHYSNET2 resources_NET2: NET_BW_EGR_KILOBIT_PER_SEC:20 resources: VCPU:1 group_policy: isolate response_json_paths: # two candidates, one for each NUMA node providing VCPU $.allocation_requests.`len`: 2 $.provider_summaries.`len`: 12 # 3 members of the mappings dict $.allocation_requests[0].mappings.`len`: 3 # One member of each list in the mappings $.allocation_requests[0].mappings[''].`len`: 1 $.allocation_requests[0].mappings._NET1.`len`: 1 $.allocation_requests[0].mappings._NET2.`len`: 1 $.allocation_requests[0].mappings[''][0]: /$ENVIRON['NUMA0_UUID']|$ENVIRON['NUMA1_UUID']/ $.allocation_requests[0].mappings._NET1[0]: $ENVIRON['ESN1_UUID'] $.allocation_requests[0].mappings._NET2[0]: $ENVIRON['ESN2_UUID'] - name: non isolated shows both request groups for the request that combines the resources GET: /allocation_candidates query_parameters: # Two chunks of bandwidth on the same network. We pick PHYSNET1 because # only one provider with bandwidth resource has that trait (ESN1). This, # with group_policy=none, forces the resources to be consolidated onto # that one provider. We need to show that the mappings accurately reflect # both request groups. resources_BWA: NET_BW_EGR_KILOBIT_PER_SEC:10 required_BWA: CUSTOM_PHYSNET1 resources_BWB: NET_BW_EGR_KILOBIT_PER_SEC:20 required_BWB: CUSTOM_PHYSNET1 group_policy: none response_json_paths: $.allocation_requests.`len`: 1 $.provider_summaries.`len`: 12 # Fix for https://storyboard.openstack.org/#!/story/2006068 # We should get a mapping from each request group to ESN1: $.allocation_requests[0].mappings: _BWA: ["$ENVIRON['ESN1_UUID']"] _BWB: ["$ENVIRON['ESN1_UUID']"] # Confirm that a resource provider which provides two different classes # of inventory only shows up in a mapping for any suffix once. 
- name: granular two resources on one suffix GET: /allocation_candidates query_parameters: required_NET1: CUSTOM_PHYSNET1 resources_NET1: NET_BW_EGR_KILOBIT_PER_SEC:10 required_NET2: CUSTOM_PHYSNET2 resources_NET2: NET_BW_EGR_KILOBIT_PER_SEC:20 resources_COMPUTE: VCPU:1,MEMORY_MB:1024 group_policy: isolate response_json_paths: # two candidates, one for each NUMA node providing _COMPUTE $.allocation_requests.`len`: 2 $.provider_summaries.`len`: 12 # 3 members of the mappings dict $.allocation_requests[0].mappings.`len`: 3 # One member of each list in the mappings $.allocation_requests[0].mappings._COMPUTE.`len`: 1 $.allocation_requests[0].mappings._NET1.`len`: 1 $.allocation_requests[0].mappings._NET2.`len`: 1 $.allocation_requests[0].mappings._COMPUTE[0]: /$ENVIRON['NUMA0_UUID']|$ENVIRON['NUMA1_UUID']/ $.allocation_requests[0].mappings._NET1[0]: $ENVIRON['ESN1_UUID'] $.allocation_requests[0].mappings._NET2[0]: $ENVIRON['ESN2_UUID'] ././@PaxHeader0000000000000000000000000000021100000000000011447 xustar0000000000000000115 path=openstack_placement-13.0.0/placement/tests/functional/gabbits/allocation-candidates-mappings-sharing.yaml 22 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocation-candidates-mappings-sharing0000664000175000017500000000551400000000000034143 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # Tests for allocation request mappings. fixtures: # See the layout diagram in this fixture's docstring in ../fixtures.py - GranularFixture defaults: request_headers: x-auth-token: admin content-type: application/json accept: application/json # 1.34 is the microversion at which mappings are expected openstack-api-version: placement 1.34 tests: - name: simple mapping non granular GET: /allocation_candidates query_parameters: resources: VCPU:1 required: HW_CPU_X86_SSE response_json_paths: $.allocation_requests.`len`: 1 $.provider_summaries.`len`: 1 $.allocation_requests[0].allocations["$ENVIRON['CN_MIDDLE']"].resources: VCPU: 1 $.allocation_requests[0].mappings: "": - $ENVIRON['CN_MIDDLE'] - name: simple mapping with shared GET: /allocation_candidates query_parameters: resources: VCPU:1,DISK_GB:1 required: HW_CPU_X86_SSE response_json_paths: $.allocation_requests.`len`: 2 $.provider_summaries.`len`: 3 $.allocation_requests[0].allocations["$ENVIRON['CN_MIDDLE']"].resources: VCPU: 1 # We can't cleanly test for which providers will show up in which # mappings in this request, so instead we confirm the size. Other tests # cover which suitably. 
$.allocation_requests[0].mappings.`len`: 1 $.allocation_requests[0].mappings[""].`len`: 2 $.allocation_requests[1].mappings.`len`: 1 $.allocation_requests[1].mappings[""].`len`: 2 - name: group mapping with shared GET: /allocation_candidates query_parameters: resources: VCPU:1 resources_DISK_A: DISK_GB:1 resources_DISK_B: DISK_GB:1 required: HW_CPU_X86_SSE group_policy: isolate response_json_paths: $.allocation_requests.`len`: 2 $.provider_summaries.`len`: 3 $.allocation_requests[0].allocations["$ENVIRON['CN_MIDDLE']"].resources: VCPU: 1 $.allocation_requests[0].mappings.`len`: 3 $.allocation_requests[0].mappings[""][0]: $ENVIRON['CN_MIDDLE'] $.allocation_requests[0].mappings['_DISK_A'][0]: /(?:$ENVIRON['SHR_DISK_1']|$ENVIRON['SHR_DISK_2'])/ $.allocation_requests[0].mappings['_DISK_B'][0]: /(?:$ENVIRON['SHR_DISK_1']|$ENVIRON['SHR_DISK_2'])/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocation-candidates-member-of.yaml0000664000175000017500000002322600000000000033506 0ustar00zuulzuul00000000000000# Tests of allocation candidates API fixtures: - NonSharedStorageFixture defaults: request_headers: x-auth-token: admin content-type: application/json accept: application/json openstack-api-version: placement 1.24 tests: - name: get bad member_of microversion GET: /allocation_candidates?resources=VCPU:1&member_of=in:$ENVIRON['AGGA_UUID'],$ENVIRON['AGGB_UUID'] request_headers: openstack-api-version: placement 1.18 status: 400 response_strings: - Invalid query string parameters - "'member_of' was unexpected" - name: get allocation candidates invalid member_of value GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&member_of=INVALID_UUID status: 400 response_strings: - Expected 'member_of' parameter to contain valid UUID(s). - name: get allocation candidates no 'in:' for multiple member_of GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&member_of=$ENVIRON['AGGA_UUID'],$ENVIRON['AGGB_UUID'] status: 400 response_strings: - Multiple values for 'member_of' must be prefixed with the 'in:' or '!in:' keyword using the valid microversion. - name: get allocation candidates multiple member_of with 'in:' but invalid values GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&member_of=in:$ENVIRON['AGGA_UUID'],INVALID_UUID status: 400 response_strings: - Expected 'member_of' parameter to contain valid UUID(s). - name: get allocation candidates multiple member_of with 'in:' but no aggregates GET: /allocation_candidates?&member_of=in:&resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100 status: 400 response_strings: - Expected 'member_of' parameter to contain valid UUID(s). 
- name: get allocation candidates with no match for member_of GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&member_of=$ENVIRON['AGGA_UUID'] status: 200 response_json_paths: $.allocation_requests.`len`: 0 - name: get compute node 1 state GET: /resource_providers/$ENVIRON['CN1_UUID'] - name: associate the first compute node with aggA PUT: /resource_providers/$ENVIRON['CN1_UUID']/aggregates data: aggregates: - $ENVIRON['AGGA_UUID'] resource_provider_generation: $HISTORY['get compute node 1 state'].$RESPONSE['$.generation'] status: 200 - name: verify that the member_of call now returns 1 allocation_candidate GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&member_of=in:$ENVIRON['AGGA_UUID'],$ENVIRON['AGGB_UUID'] status: 200 response_json_paths: $.allocation_requests.`len`: 1 - name: get compute node 2 state GET: /resource_providers/$ENVIRON['CN2_UUID'] - name: associate the second compute node with aggB PUT: /resource_providers/$ENVIRON['CN2_UUID']/aggregates data: aggregates: - $ENVIRON['AGGB_UUID'] resource_provider_generation: $HISTORY['get compute node 2 state'].$RESPONSE['$.generation'] status: 200 - name: verify that the member_of call now returns both RPs GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&member_of=in:$ENVIRON['AGGA_UUID'],$ENVIRON['AGGB_UUID'] status: 200 response_json_paths: $.allocation_requests.`len`: 2 - name: verify that aggC still returns no RPs GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&member_of=$ENVIRON['AGGC_UUID'] status: 200 response_json_paths: $.allocation_requests.`len`: 0 - name: get current compute node 1 state GET: /resource_providers/$ENVIRON['CN1_UUID'] - name: now associate the first compute node with both aggA and aggC PUT: /resource_providers/$ENVIRON['CN1_UUID']/aggregates data: aggregates: - $ENVIRON['AGGA_UUID'] - $ENVIRON['AGGC_UUID'] resource_provider_generation: $HISTORY['get current compute node 1 state'].$RESPONSE['$.generation'] - name: verify that the member_of call for aggs A and B still returns 2 allocation_candidates GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&member_of=in:$ENVIRON['AGGA_UUID'],$ENVIRON['AGGB_UUID'] status: 200 response_json_paths: $.allocation_requests.`len`: 2 - name: verify microversion fail for multiple member_of params GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&member_of=$ENVIRON['AGGA_UUID']&member_of=$ENVIRON['AGGB_UUID'] request_headers: openstack-api-version: placement 1.23 status: 400 response_strings: - 'Multiple member_of parameters are not supported' response_json_paths: $.errors[0].title: Bad Request - name: verify that no RP is associated with BOTH aggA and aggB GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&member_of=$ENVIRON['AGGA_UUID']&member_of=$ENVIRON['AGGB_UUID'] status: 200 response_json_paths: $.allocation_requests.`len`: 0 - name: associate the second compute node with aggA and aggB PUT: /resource_providers/$ENVIRON['CN2_UUID']/aggregates data: aggregates: - $ENVIRON['AGGA_UUID'] - $ENVIRON['AGGB_UUID'] resource_provider_generation: $HISTORY['associate the second compute node with aggB'].$RESPONSE['$.resource_provider_generation'] status: 200 - name: verify that second RP is associated with BOTH aggA and aggB GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&member_of=$ENVIRON['AGGA_UUID']&member_of=$ENVIRON['AGGB_UUID'] status: 200 response_json_paths: $.allocation_requests.`len`: 1 # Tests 
for negative aggregate membership from microversion 1.32 # Now the aggregation map is as below # { # CN1: [AGGA, AGGC], # CN2: [AGGA, AGGB], # CN3: [] # } - name: negative agg error on old microversion with ! prefix GET: /allocation_candidates?resources=VCPU:1&member_of=!$ENVIRON['AGGA_UUID'] status: 400 request_headers: openstack-api-version: placement 1.31 response_strings: - "Forbidden member_of parameters are not supported in the specified microversion" - name: negative agg error on old microversion with !in prefix GET: /allocation_candidates?resources=VCPU:1&member_of=!in:$ENVIRON['AGGA_UUID'] status: 400 request_headers: openstack-api-version: placement 1.31 response_strings: - "Forbidden member_of parameters are not supported in the specified microversion" - name: negative agg error on orphaned queryparam GET: /allocation_candidates?member_of=!$ENVIRON['AGGA_UUID'] status: 400 request_headers: openstack-api-version: placement 1.32 response_strings: - "All member_of parameters must be associated with resources" - name: negative agg error on invalid agg GET: /allocation_candidates?resources=VCPU:1&member_of=!(^o^) status: 400 request_headers: openstack-api-version: placement 1.32 response_strings: - "Invalid query string parameters: Expected 'member_of' parameter to contain valid UUID(s)." - name: negative agg error on invalid usage of in prefix GET: /allocation_candidates?resources=VCPU:1&member_of=in:$ENVIRON['AGGA_UUID'],!$ENVIRON['AGGB_UUID'] status: 400 request_headers: openstack-api-version: placement 1.32 response_strings: - "Invalid query string parameters: Expected 'member_of' parameter to contain valid UUID(s)." - name: negative agg GET: /allocation_candidates?resources=VCPU:1&member_of=!$ENVIRON['AGGC_UUID'] status: 200 request_headers: openstack-api-version: placement 1.32 response_json_paths: # CN1 is excluded $.allocation_requests.`len`: 2 $.provider_summaries.`len`: 2 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['CN3_UUID']"].resources.VCPU: 1 - name: negative agg multiple GET: /allocation_candidates?resources=VCPU:1&member_of=!in:$ENVIRON['AGGB_UUID'],$ENVIRON['AGGC_UUID'] status: 200 request_headers: openstack-api-version: placement 1.32 response_json_paths: # Both CN1 and CN2 are excluded $.allocation_requests.`len`: 1 $.provider_summaries.`len`: 1 $.allocation_requests..allocations["$ENVIRON['CN3_UUID']"].resources.VCPU: 1 - name: negative agg with positive agg GET: /allocation_candidates?resources=VCPU:1&member_of=!$ENVIRON['AGGB_UUID']&member_of=$ENVIRON['AGGC_UUID'] status: 200 request_headers: openstack-api-version: placement 1.32 response_json_paths: # Only CN1 is returned $.allocation_requests.`len`: 1 $.provider_summaries.`len`: 1 $.allocation_requests..allocations["$ENVIRON['CN1_UUID']"].resources.VCPU: 1 - name: negative agg multiple with positive agg GET: /allocation_candidates?resources=VCPU:1&member_of=!in:$ENVIRON['AGGB_UUID'],$ENVIRON['AGGC_UUID']&member_of=$ENVIRON['AGGA_UUID'] status: 200 request_headers: openstack-api-version: placement 1.32 response_json_paths: # no rp is returned $.allocation_requests.`len`: 0 $.provider_summaries.`len`: 0 # This request is equivalent to the one in "negative agg with positive agg" - name: negative agg with the same agg on positive get rp GET: /allocation_candidates?resources=VCPU:1&member_of=!$ENVIRON['AGGB_UUID']&member_of=in:$ENVIRON['AGGB_UUID'],$ENVIRON['AGGC_UUID'] status: 200 request_headers: openstack-api-version: placement 1.32 
response_json_paths: $.allocation_requests.`len`: 1 $.provider_summaries.`len`: 1 $.allocation_requests..allocations["$ENVIRON['CN1_UUID']"].resources.VCPU: 1 - name: negative agg with the same agg on positive no rp GET: /allocation_candidates?resources=VCPU:1&member_of=!$ENVIRON['AGGB_UUID']&member_of=$ENVIRON['AGGB_UUID'] status: 200 request_headers: openstack-api-version: placement 1.32 response_json_paths: # no rp is returned $.allocation_requests.`len`: 0 $.provider_summaries.`len`: 0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocation-candidates-policy.yaml0000664000175000017500000000076000000000000033132 0ustar00zuulzuul00000000000000# This tests GET /allocation_candidates using a non-admin # user with an open policy configuration. The response validation is # intentionally minimal. fixtures: - OpenPolicyFixture defaults: request_headers: x-auth-token: user accept: application/json content-type: application/json openstack-api-version: placement latest tests: - name: get allocation candidates GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100 status: 200 ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=openstack_placement-13.0.0/placement/tests/functional/gabbits/allocation-candidates-root-required.yaml 22 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocation-candidates-root-required.ya0000664000175000017500000003435200000000000034107 0ustar00zuulzuul00000000000000# Tests of allocation candidates API with root_required fixtures: - NUMANetworkFixture defaults: request_headers: x-auth-token: admin accept: application/json openstack-api-version: placement 1.35 tests: - name: root_required before microversion GET: /allocation_candidates?resources=VCPU:1&root_required=HW_CPU_X86_AVX2 request_headers: openstack-api-version: placement 1.34 status: 400 response_strings: - Invalid query string parameters - "'root_required' does not match any of the regexes" - name: conflicting required and forbidden GET: /allocation_candidates?resources=VCPU:1&root_required=HW_CPU_X86_AVX2,HW_CPU_X86_SSE,!HW_CPU_X86_AVX2 status: 400 response_strings: - "Conflicting required and forbidden traits found in root_required: HW_CPU_X86_AVX2" response_json_paths: errors[0].code: placement.query.bad_value - name: nonexistent required GET: /allocation_candidates?resources=VCPU:1&root_required=CUSTOM_NO_EXIST,HW_CPU_X86_SSE,!HW_CPU_X86_AVX status: 400 response_strings: - "No such trait(s): CUSTOM_NO_EXIST" - name: nonexistent forbidden GET: /allocation_candidates?resources=VCPU:1&root_required=!CUSTOM_NO_EXIST,HW_CPU_X86_SSE,!HW_CPU_X86_AVX status: 400 response_strings: - "No such trait(s): CUSTOM_NO_EXIST" - name: multiple root_required is an error GET: /allocation_candidates?resources=VCPU:1&root_required=MISC_SHARES_VIA_AGGREGATE&root_required=!HW_NUMA_ROOT status: 400 response_strings: - Query parameter 'root_required' may be specified only once. 
response_json_paths: errors[0].code: placement.query.duplicate_key - name: no hits for a required trait that is on children in one tree and absent from the other GET: /allocation_candidates?resources=VCPU:1&root_required=HW_NUMA_ROOT status: 200 response_json_paths: # No root has HW_NUMA_ROOT $.allocation_requests.`len`: 0 - name: required trait on a sharing root GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&root_required=MISC_SHARES_VIA_AGGREGATE status: 200 response_json_paths: # MISC_SHARES is on the sharing root, but not on any of the anchor roots $.allocation_requests.`len`: 0 - name: root_required trait on children GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&root_required=HW_NUMA_ROOT status: 200 response_json_paths: # HW_NUMA_ROOT is on child providers, not on any root $.allocation_requests.`len`: 0 - name: required trait not on any provider GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&root_required=HW_CPU_X86_AVX2 status: 200 response_json_paths: # HW_CPU_X86_AVX2 isn't anywhere in the env. $.allocation_requests.`len`: 0 - name: limit to multiattach-capable unsuffixed no sharing GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024&root_required=COMPUTE_VOLUME_MULTI_ATTACH status: 200 response_json_paths: # We only get results from cn1 because only it has MULTI_ATTACH # We get candidates where VCPU and MEMORY_MB are provided by the same or # alternate NUMA roots. $.allocation_requests.`len`: 4 $.allocation_requests..allocations["$ENVIRON['NUMA0_UUID']"].resources.VCPU: [1, 1] $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.VCPU: [1, 1] $.allocation_requests..allocations["$ENVIRON['NUMA0_UUID']"].resources.MEMORY_MB: [1024, 1024] $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.MEMORY_MB: [1024, 1024] - name: limit to multiattach-capable separate granular no isolate no sharing GET: /allocation_candidates?resources1=VCPU:1&resources2=MEMORY_MB:1024&group_policy=none&root_required=COMPUTE_VOLUME_MULTI_ATTACH status: 200 response_json_paths: # Same as above $.allocation_requests.`len`: 4 # Prove we didn't break provider summaries $.provider_summaries["$ENVIRON['NUMA0_UUID']"].resources[VCPU][capacity]: 4 $.provider_summaries["$ENVIRON['NUMA1_UUID']"].resources[MEMORY_MB][capacity]: 2048 - name: limit to multiattach-capable separate granular isolate no sharing GET: /allocation_candidates?resources1=VCPU:1&resources2=MEMORY_MB:1024&group_policy=isolate&root_required=COMPUTE_VOLUME_MULTI_ATTACH status: 200 response_json_paths: # Now we (perhaps unrealistically) only get candidates where VCPU and # MEMORY_MB are on alternate NUMA roots. $.allocation_requests.`len`: 2 $.allocation_requests..allocations["$ENVIRON['NUMA0_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA0_UUID']"].resources.MEMORY_MB: 1024 $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.MEMORY_MB: 1024 - name: limit to multiattach-capable unsuffixed sharing GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&root_required=COMPUTE_VOLUME_MULTI_ATTACH status: 200 response_json_paths: # We only get results from cn1 because only it has MULTI_ATTACH # We get candidates where VCPU and MEMORY_MB are provided by the same or # alternate NUMA roots. DISK_GB is always provided by the sharing provider. 
$.allocation_requests.`len`: 4 $.provider_summaries["$ENVIRON['NUMA0_UUID']"].traits: - HW_NUMA_ROOT $.provider_summaries["$ENVIRON['NUMA1_UUID']"].traits: - HW_NUMA_ROOT - CUSTOM_FOO - name: limit to multiattach-capable granular sharing GET: /allocation_candidates?resources1=VCPU:1,MEMORY_MB:1024&resources2=DISK_GB:100&&group_policy=none&root_required=COMPUTE_VOLUME_MULTI_ATTACH status: 200 response_json_paths: # We only get results from cn1 because only it has MULTI_ATTACH # We only get candidates where VCPU and MEMORY_MB are provided by the same # NUMA root, because requested in the same suffixed group. DISK_GB is # always provided by the sharing provider. $.allocation_requests.`len`: 2 $.allocation_requests..allocations["$ENVIRON['NUMA0_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA0_UUID']"].resources.MEMORY_MB: 1024 $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.MEMORY_MB: 1024 - name: trait exists on root and child in separate trees case 1 unsuffixed required GET: /allocation_candidates?resources=VCPU:1,DISK_GB:100&required=CUSTOM_FOO status: 200 response_json_paths: # We get candidates from cn2 and cn2+ss1 because cn2 has all the # resources and the trait. # We get a candidate from numa1+ss1 because (even in the unsuffixed group) # regular `required` is tied to the resource in that group. $.allocation_requests.`len`: 3 $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: [1, 1] $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: [100, 100] $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.DISK_GB: 100 - name: trait exists on root and child in separate trees case 2 unsuffixed root_required GET: /allocation_candidates?resources=VCPU:1,DISK_GB:100&root_required=CUSTOM_FOO status: 200 response_json_paths: # We only get candidates from cn2 and cn2+ss1 because only cn2 has FOO on # the root $.allocation_requests.`len`: 2 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: [1, 1] $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: 100 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.DISK_GB: 100 - name: trait exists on root and child in separate trees case 3 suffixed required GET: /allocation_candidates?resources1=VCPU:1&required1=CUSTOM_FOO&resources2=DISK_GB:100&group_policy=none status: 200 response_json_paths: # We get a candidate from cn2 because it has all the resources and the trait; # and from cn2+ss1 because group_policy=none and the required trait is on # the group with the VCPU. # We get a candidate from numa1+ss1 because the required trait is on the # group with the VCPU.
$.allocation_requests.`len`: 3 $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: [1, 1] $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: [100, 100] $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.DISK_GB: 100 - name: trait exists on root and child in separate trees case 4 suffixed root_required GET: /allocation_candidates?resources1=VCPU:1&resources2=DISK_GB:100&group_policy=none&root_required=CUSTOM_FOO status: 200 response_json_paths: # We only get candidates from cn2 and cn2+ss1 because only cn2 has FOO on # the root $.allocation_requests.`len`: 2 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: [1, 1] $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: 100 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.DISK_GB: 100 - name: no filtering for a forbidden trait that is on children in one tree and absent from the other GET: /allocation_candidates?resources=VCPU:3&root_required=!HW_NUMA_ROOT status: 200 response_json_paths: # No root has HW_NUMA_ROOT, so we hit all providers of VCPU with adequate capacity $.allocation_requests.`len`: 2 $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.VCPU: 3 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: 3 - name: forbidden trait on a sharing root GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&root_required=!MISC_SHARES_VIA_AGGREGATE status: 200 response_json_paths: # This does not filter out candidates including the sharing provider, of # which there are five (four from the combinations of VCPU+MEMORY_MB on cn1 # because non-isolated; one using VCPU+MEMORY_MB from cn2). The sixth is # where cn2 provides all the resources. $.allocation_requests.`len`: 6 $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: [100, 100, 100, 100, 100] - name: combine required with irrelevant forbidden # This time the irrelevant forbidden is on a child provider GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&root_required=CUSTOM_FOO,!HW_NUMA_ROOT status: 200 response_json_paths: # This is as above, but filtered to the candidates involving cn2, which has # CUSTOM_FOO on the root. $.allocation_requests.`len`: 2 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: [1, 1] $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.MEMORY_MB: [1024, 1024] $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.DISK_GB: 100 $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: 100 - name: redundant required and forbidden GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&root_required=CUSTOM_FOO,!COMPUTE_VOLUME_MULTI_ATTACH status: 200 response_json_paths: # Same result as above. The forbidden multi-attach and the required foo are # both doing the same thing. 
$.allocation_requests.`len`: 2 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: [1, 1] $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.MEMORY_MB: [1024, 1024] $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.DISK_GB: 100 $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: 100 - name: forbiddens cancel each other GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&root_required=!CUSTOM_FOO,!COMPUTE_VOLUME_MULTI_ATTACH status: 200 response_json_paths: # !foo gets rid of cn2; !multi-attach gets rid of cn1. $.allocation_requests.`len`: 0 - name: isolate foo granular sharing GET: /allocation_candidates?resources1=VCPU:1,MEMORY_MB:1024&resources2=DISK_GB:100&&group_policy=none&root_required=!CUSTOM_FOO status: 200 response_json_paths: # We only get results from cn1 because cn2 has the forbidden foo trait. # We only get candidates where VCPU and MEMORY_MB are provided by the same # NUMA root, because requested in the same suffixed group. DISK_GB is # always provided by the sharing provider. $.allocation_requests.`len`: 2 $.allocation_requests..allocations["$ENVIRON['NUMA0_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: [100, 100] - name: unsuffixed required and root_required same trait GET: /allocation_candidates?resources=VCPU:1&required=CUSTOM_FOO&root_required=CUSTOM_FOO status: 200 response_json_paths: # required=FOO would have limited us to getting VCPU from numa1 and cn2 # BUT root_required=FOO should further restrict us to just cn2 $.allocation_requests.`len`: 1 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: 1 - name: granular required and root_required same trait GET: /allocation_candidates?resources1=VCPU:1&required1=CUSTOM_FOO&root_required=CUSTOM_FOO status: 200 response_json_paths: # same as above $.allocation_requests.`len`: 1 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: 1 - name: required positive and root_required negative same trait GET: /allocation_candidates?resources1=VCPU:1&required1=CUSTOM_FOO&root_required=!CUSTOM_FOO status: 200 response_json_paths: # Both numa1 and cn2 match required1=FOO, but since we're forbidding FOO on # the root, we should only get numa1 $.allocation_requests.`len`: 1 $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.VCPU: 1 - name: required negative and root_required positive same trait GET: /allocation_candidates?resources1=VCPU:1&required1=!CUSTOM_FOO&root_required=CUSTOM_FOO status: 200 response_json_paths: # The only provider of VCPU that doesn't have FOO is numa0. But numa0 is on # cn1, which doesn't have the required FOO on the root. 
$.allocation_requests.`len`: 0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocation-candidates-secure-rbac.yaml0000664000175000017500000000561500000000000034032 0ustar00zuulzuul00000000000000--- fixtures: - SecureRBACPolicyFixture vars: - &project_id $ENVIRON['PROJECT_ID'] - &admin_project_id $ENVIRON['ADMIN_PROJECT_ID'] - &service_project_id $ENVIRON['SERVICE_PROJECT_ID'] - &admin_headers x-auth-token: user x-roles: admin x-project-id: admin_project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &service_headers x-auth-token: user x-roles: service x-project-id: service_project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &system_admin_headers x-auth-token: user x-roles: admin,member,reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &system_reader_headers x-auth-token: user x-roles: reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &project_admin_headers x-auth-token: user x-roles: admin,member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_member_headers x-auth-token: user x-roles: member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_reader_headers x-auth-token: user x-roles: reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest tests: - name: admin can get allocation candidates GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100 request_headers: *admin_headers status: 200 - name: service can get allocation candidates GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100 request_headers: *service_headers status: 200 - name: system admin cannot get allocation candidates GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100 request_headers: *system_admin_headers status: 403 - name: system reader cannot get allocation candidates GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100 request_headers: *system_reader_headers status: 403 - name: project admin can get allocation candidates GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100 request_headers: *project_admin_headers status: 200 - name: project member cannot get allocation candidates GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100 request_headers: *project_member_headers status: 403 - name: project reader cannot allocation candidates GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100 request_headers: *project_reader_headers status: 403 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocation-candidates.yaml0000664000175000017500000006733000000000000031643 0ustar00zuulzuul00000000000000# Tests of allocation candidates API fixtures: - SharedStorageFixture defaults: request_headers: x-auth-token: admin accept: application/json openstack-api-version: placement 1.10 tests: - name: list traits GET: /traits status: 200 
response_strings: # We at least want to make sure that this trait is supported. - MISC_SHARES_VIA_AGGREGATE - name: get allocation candidates before microversion GET: /allocation_candidates?resources=VCPU:1 request_headers: openstack-api-version: placement 1.8 status: 404 - name: get allocation candidates empty resources GET: /allocation_candidates?resources= status: 400 response_strings: - Badly formed resources parameter. Expected resources query string parameter in form - 'Got: empty string.' - name: get allocation candidates no resources GET: /allocation_candidates status: 400 response_strings: - "'resources' is a required property" - name: get bad resource class GET: /allocation_candidates?resources=MCPU:99 status: 400 response_strings: - Invalid resource class in resources parameter - name: get bad limit microversion GET: /allocation_candidates?resources=VCPU:1&limit=5 request_headers: openstack-api-version: placement 1.15 status: 400 response_strings: - Invalid query string parameters - "'limit' was unexpected" - name: get bad limit type GET: /allocation_candidates?resources=VCPU:1&limit=cow request_headers: openstack-api-version: placement 1.16 status: 400 response_strings: - Invalid query string parameters - "Failed validating 'pattern'" - name: get bad limit value negative GET: /allocation_candidates?resources=VCPU:1&limit=-99 request_headers: openstack-api-version: placement 1.16 status: 400 response_strings: - Invalid query string parameters - "Failed validating 'pattern'" - name: get bad limit value zero GET: /allocation_candidates?resources=VCPU:1&limit=0 request_headers: openstack-api-version: placement 1.16 status: 400 response_strings: - Invalid query string parameters - "Failed validating 'pattern'" - name: get allocation candidates no allocations yet GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100 status: 200 response_json_paths: # There are 4 providers involved. 2 compute nodes, 2 shared storage # providers $.provider_summaries.`len`: 4 # There are 5 allocation requests, one combination for each compute # node that provides the VCPU/MEMORY_MB and DISK_GB provided by each # shared storage provider, plus compute node #2 alone $.allocation_requests.`len`: 5 # Verify that compute node #1 only has VCPU and MEMORY_MB listed in the # resource requests. This validates the entire resources key. 
$.allocation_requests..allocations[?resource_provider.uuid="$ENVIRON['CN1_UUID']"].resources: - VCPU: 1 MEMORY_MB: 1024 - VCPU: 1 MEMORY_MB: 1024 # Verify that compute node #2 has VCPU and MEMORY_MB listed in the # resource requests thrice and DISK_GB once $.allocation_requests..allocations[?resource_provider.uuid="$ENVIRON['CN2_UUID']"].resources[VCPU]: [1, 1, 1] $.allocation_requests..allocations[?resource_provider.uuid="$ENVIRON['CN2_UUID']"].resources[MEMORY_MB]: [1024, 1024, 1024] $.allocation_requests..allocations[?resource_provider.uuid="$ENVIRON['CN2_UUID']"].resources[DISK_GB]: 100 # Verify that shared storage providers only have DISK_GB listed in the # resource requests, but each is listed twice $.allocation_requests..allocations[?resource_provider.uuid="$ENVIRON['SS_UUID']"].resources[DISK_GB]: [100, 100] $.allocation_requests..allocations[?resource_provider.uuid="$ENVIRON['SS2_UUID']"].resources[DISK_GB]: [100, 100] # Verify that the resources listed in the provider summary for compute # node #1 show correct capacity and usage $.provider_summaries["$ENVIRON['CN1_UUID']"].resources[VCPU].capacity: 384 # 16.0 * 24 $.provider_summaries["$ENVIRON['CN1_UUID']"].resources[VCPU].used: 0 $.provider_summaries["$ENVIRON['CN1_UUID']"].resources[MEMORY_MB].capacity: 196608 # 1.5 * 128G $.provider_summaries["$ENVIRON['CN1_UUID']"].resources[MEMORY_MB].used: 0 # Verify that the resources listed in the provider summary for compute # node #2 show correct capacity and usage $.provider_summaries["$ENVIRON['CN2_UUID']"].resources[VCPU].capacity: 384 # 16.0 * 24 $.provider_summaries["$ENVIRON['CN2_UUID']"].resources[VCPU].used: 0 $.provider_summaries["$ENVIRON['CN2_UUID']"].resources[MEMORY_MB].capacity: 196608 # 1.5 * 128G $.provider_summaries["$ENVIRON['CN2_UUID']"].resources[MEMORY_MB].used: 0 $.provider_summaries["$ENVIRON['CN2_UUID']"].resources[DISK_GB].capacity: 1900 # 1.0 * 2000 - 100G $.provider_summaries["$ENVIRON['CN2_UUID']"].resources[DISK_GB].used: 0 # Verify that the resources listed in the provider summary for shared # storage show correct capacity and usage $.provider_summaries["$ENVIRON['SS_UUID']"].resources[DISK_GB].capacity: 1900 # 1.0 * 2000 - 100G $.provider_summaries["$ENVIRON['SS_UUID']"].resources[DISK_GB].used: 0 $.provider_summaries["$ENVIRON['SS2_UUID']"].resources[DISK_GB].capacity: 1900 # 1.0 * 2000 - 100G $.provider_summaries["$ENVIRON['SS2_UUID']"].resources[DISK_GB].used: 0 response_forbidden_headers: # In the default microversion in this file (1.10) the cache headers # are not preset. - cache-control - last-modified # Verify the 1.12 format of the allocation_requests sub object which # changes from a list-list to dict-ish format. - name: get allocation candidates 1.12 dictish GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100 request_headers: openstack-api-version: placement 1.12 response_json_paths: # There are 4 providers involved. 2 compute nodes, 2 shared storage # providers $.provider_summaries.`len`: 4 # There are 5 allocation requests, one combination for each compute # node that provides the VCPU/MEMORY_MB and DISK_GB provided by each # shared storage provider, plus compute node #2 alone $.allocation_requests.`len`: 5 # Verify that compute node #1 only has VCPU and MEMORY_MB listed in the # resource requests. This validates the entire resources key. 
$.allocation_requests..allocations["$ENVIRON['CN1_UUID']"].resources: - VCPU: 1 MEMORY_MB: 1024 - VCPU: 1 MEMORY_MB: 1024 # Verify that compute node #2 has VCPU and MEMORY_MB listed in the # resource requests thrice and DISK_GB once $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources[VCPU]: [1, 1, 1] $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources[MEMORY_MB]: [1024, 1024, 1024] $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources[DISK_GB]: 100 # Verify that shared storage providers only have DISK_GB listed in the # resource requests, but each is listed twice $.allocation_requests..allocations["$ENVIRON['SS_UUID']"].resources[DISK_GB]: [100, 100] $.allocation_requests..allocations["$ENVIRON['SS2_UUID']"].resources[DISK_GB]: [100, 100] - name: get allocation candidates cache headers GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100 request_headers: # microversion 1.15 to cause cache headers openstack-api-version: placement 1.15 response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ - name: get allocation candidates with limit GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&limit=1 status: 200 request_headers: openstack-api-version: placement 1.16 response_json_paths: $.allocation_requests.`len`: 1 - name: get allocation candidates with multiple limits picks the first one GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&limit=10&limit=1 status: 200 request_headers: openstack-api-version: placement 1.16 response_json_paths: $.allocation_requests.`len`: 5 - name: get allocation candidates with required traits in old version GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&required=HW_CPU_X86_SSE status: 400 request_headers: openstack-api-version: placement 1.16 response_strings: - Invalid query string parameters - "'required' was unexpected" - name: get allocation candidates without traits summary in old version GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100 status: 200 request_headers: openstack-api-version: placement 1.16 response_json_paths: $.provider_summaries["$ENVIRON['CN1_UUID']"].`len`: 1 $.provider_summaries["$ENVIRON['CN2_UUID']"].`len`: 1 - name: get allocation candidates with invalid trait GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&required=INVALID_TRAIT status: 400 request_headers: openstack-api-version: placement 1.17 response_strings: - No such trait(s) - name: get allocation candidates with empty required value GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&required= status: 400 request_headers: openstack-api-version: placement 1.17 response_strings: - "Invalid query string parameters: Expected 'required' parameter value of the form: HW_CPU_X86_VMX,CUSTOM_MAGIC." - name: get allocation candidates with empty required value 1.22 GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&required= status: 400 request_headers: openstack-api-version: placement 1.22 response_strings: - "Invalid query string parameters: Expected 'required' parameter value of the form: HW_CPU_X86_VMX,!CUSTOM_MAGIC." 
- name: get allocation candidates with invalid required value GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&required=,, status: 400 request_headers: openstack-api-version: placement 1.17 response_strings: - "Invalid query string parameters: Expected 'required' parameter value of the form: HW_CPU_X86_VMX,CUSTOM_MAGIC." - name: get allocation candidates with forbidden trait pre-forbidden GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&required=!CUSTOM_MAGIC status: 400 request_headers: openstack-api-version: placement 1.17 response_strings: - "Invalid query string parameters: Expected 'required' parameter value of the form: HW_CPU_X86_VMX,CUSTOM_MAGIC." - name: get allocation candidates with required trait GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&required=HW_CPU_X86_SSE status: 200 request_headers: openstack-api-version: placement 1.17 response_json_paths: $.allocation_requests.`len`: 2 $.provider_summaries.`len`: 3 $.provider_summaries["$ENVIRON['CN1_UUID']"].`len`: 2 $.provider_summaries["$ENVIRON['CN1_UUID']"].traits.`sorted`: - HW_CPU_X86_SSE - HW_CPU_X86_SSE2 - name: get allocation candidates with forbidden trait GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&required=!HW_CPU_X86_SSE status: 200 request_headers: openstack-api-version: placement 1.22 response_json_paths: # There are no allocation requests for CN1. CN2 always satisfies the VCPU/MEMORY_MB. # The disk comes from CN2 or one of the shared storage providers. $.allocation_requests.`len`: 3 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources[VCPU]: [1, 1, 1] $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources[MEMORY_MB]: [1024, 1024, 1024] $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources[DISK_GB]: 100 $.allocation_requests..allocations["$ENVIRON['SS_UUID']"].resources[DISK_GB]: 100 $.allocation_requests..allocations["$ENVIRON['SS2_UUID']"].resources.DISK_GB: 100 - name: get allocation candidates with multiple required traits GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&required=HW_CPU_X86_SSE,HW_CPU_X86_SSE2 status: 200 request_headers: openstack-api-version: placement 1.17 response_json_paths: $.allocation_requests.`len`: 2 $.provider_summaries.`len`: 3 $.provider_summaries["$ENVIRON['CN1_UUID']"].`len`: 2 $.provider_summaries["$ENVIRON['CN1_UUID']"].traits.`sorted`: - HW_CPU_X86_SSE - HW_CPU_X86_SSE2 - name: get allocation candidates with required trait and no matching GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&required=HW_CPU_X86_SSE3 status: 200 request_headers: openstack-api-version: placement 1.17 response_json_paths: $.allocation_requests.`len`: 0 $.provider_summaries.`len`: 0 # Before microversion 1.27, the ``provider_summaries`` field in the response # of the ``GET /allocation_candidates`` API included inventories of resource # classes that are requested. 
- name: get allocation candidates provider summaries with requested resource GET: /allocation_candidates?resources=VCPU:1 status: 200 request_headers: openstack-api-version: placement 1.26 response_json_paths: $.allocation_requests.`len`: 2 $.provider_summaries.`len`: 2 $.provider_summaries["$ENVIRON['CN1_UUID']"].resources.`len`: 1 $.provider_summaries["$ENVIRON['CN1_UUID']"].resources: VCPU: capacity: 384 # 16.0 * 24 used: 0 $.provider_summaries["$ENVIRON['CN2_UUID']"].resources.`len`: 1 $.provider_summaries["$ENVIRON['CN2_UUID']"].resources: VCPU: capacity: 384 # 16.0 * 24 used: 0 # From microversion 1.27, the ``provider_summaries`` field includes # all the resource class inventories regardless of whether it is requested. - name: get allocation candidates provider summaries with all resources GET: /allocation_candidates?resources=VCPU:1 status: 200 request_headers: openstack-api-version: placement 1.27 response_json_paths: $.allocation_requests.`len`: 2 $.provider_summaries.`len`: 2 $.provider_summaries["$ENVIRON['CN1_UUID']"].resources.`len`: 2 $.provider_summaries["$ENVIRON['CN1_UUID']"].resources: VCPU: capacity: 384 # 16.0 * 24 used: 0 MEMORY_MB: capacity: 196608 # 1.5 * 128G used: 0 $.provider_summaries["$ENVIRON['CN2_UUID']"].resources.`len`: 3 $.provider_summaries["$ENVIRON['CN2_UUID']"].resources: VCPU: capacity: 384 # 16.0 * 24 used: 0 MEMORY_MB: capacity: 196608 # 1.5 * 128G used: 0 DISK_GB: capacity: 1900 # 1.0 * 2000 - 100G used: 0 # Before microversion 1.29, no root/parent uuid is included - name: get allocation candidates no root or parent uuid GET: /allocation_candidates?resources=VCPU:1 status: 200 request_headers: openstack-api-version: placement 1.28 response_json_paths: $.allocation_requests.`len`: 2 $.provider_summaries.`len`: 2 $.provider_summaries.["$ENVIRON['CN1_UUID']"].`len`: 2 $.provider_summaries.["$ENVIRON['CN2_UUID']"].`len`: 2 - name: get allocation candidates with root and parent uuid GET: /allocation_candidates?resources=VCPU:1 status: 200 request_headers: openstack-api-version: placement 1.29 response_json_paths: $.allocation_requests.`len`: 2 $.provider_summaries.`len`: 10 $.provider_summaries.["$ENVIRON['CN1_UUID']"].`len`: 4 $.provider_summaries.["$ENVIRON['CN2_UUID']"].`len`: 4 $.provider_summaries.["$ENVIRON['CN1_UUID']"].parent_provider_uuid: null $.provider_summaries.["$ENVIRON['CN1_UUID']"].root_provider_uuid: "$ENVIRON['CN1_UUID']" $.provider_summaries.["$ENVIRON['NUMA1_1_UUID']"].parent_provider_uuid: "$ENVIRON['CN1_UUID']" $.provider_summaries.["$ENVIRON['NUMA1_1_UUID']"].root_provider_uuid: "$ENVIRON['CN1_UUID']" $.provider_summaries.["$ENVIRON['NUMA1_2_UUID']"].parent_provider_uuid: "$ENVIRON['CN1_UUID']" $.provider_summaries.["$ENVIRON['NUMA1_2_UUID']"].root_provider_uuid: "$ENVIRON['CN1_UUID']" $.provider_summaries.["$ENVIRON['PF1_1_UUID']"].parent_provider_uuid: "$ENVIRON['NUMA1_1_UUID']" $.provider_summaries.["$ENVIRON['PF1_1_UUID']"].root_provider_uuid: "$ENVIRON['CN1_UUID']" $.provider_summaries.["$ENVIRON['PF1_2_UUID']"].parent_provider_uuid: "$ENVIRON['NUMA1_2_UUID']" $.provider_summaries.["$ENVIRON['PF1_2_UUID']"].root_provider_uuid: "$ENVIRON['CN1_UUID']" # Before microversion 1.29, it isn't aware of nested providers. 
# Namely, it can return non-root providers for allocation candidates, - name: get allocation candidates only nested provider old microversion GET: /allocation_candidates?resources=SRIOV_NET_VF:4 status: 200 request_headers: openstack-api-version: placement 1.28 response_json_paths: $.allocation_requests.`len`: 4 $.provider_summaries.`len`: 4 - name: get allocation candidates only nested provider new microversion GET: /allocation_candidates?resources=SRIOV_NET_VF:4 status: 200 request_headers: openstack-api-version: placement 1.29 response_json_paths: $.allocation_requests.`len`: 4 $.provider_summaries.`len`: 10 # ...but it can't return combinations of providers in a tree. - name: get allocation candidates root and nested old microversion GET: /allocation_candidates?resources=VCPU:1,SRIOV_NET_VF:4 status: 200 request_headers: openstack-api-version: placement 1.28 response_json_paths: $.allocation_requests.`len`: 0 $.provider_summaries.`len`: 0 - name: get allocation candidates root and nested new microversion GET: /allocation_candidates?resources=VCPU:1,SRIOV_NET_VF:4 status: 200 request_headers: openstack-api-version: placement 1.29 response_json_paths: $.allocation_requests.`len`: 4 $.provider_summaries.`len`: 10 $.allocation_requests..allocations["$ENVIRON['CN1_UUID']"].resources.VCPU: [1, 1] $.allocation_requests..allocations["$ENVIRON['PF1_1_UUID']"].resources.SRIOV_NET_VF: 4 $.allocation_requests..allocations["$ENVIRON['PF1_2_UUID']"].resources.SRIOV_NET_VF: 4 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: [1, 1] $.allocation_requests..allocations["$ENVIRON['PF2_1_UUID']"].resources.SRIOV_NET_VF: 4 $.allocation_requests..allocations["$ENVIRON['PF2_2_UUID']"].resources.SRIOV_NET_VF: 4 - name: get allocation candidates nested limit desc: confirm provider summaries are complete, fixes story/2005859 GET: /allocation_candidates?resources=VCPU:1,SRIOV_NET_VF:4&limit=1 status: 200 request_headers: openstack-api-version: placement 1.29 response_json_paths: $.allocation_requests.`len`: 1 $.allocation_requests[0].allocations.`len`: 2 # We expect all the providers that share roots with the allocations. # In this case it is the compute node, its two numa nodes and its two pfs.
$.provider_summaries.`len`: 5 # Make sure that old microversions can return combinations where # sharing providers are involved - name: get allocation candidates shared and nested old microversion GET: /allocation_candidates?resources=DISK_GB:10,SRIOV_NET_VF:4 status: 200 request_headers: openstack-api-version: placement 1.28 response_json_paths: $.allocation_requests.`len`: 8 $.provider_summaries.`len`: 6 - name: get allocation candidates in tree old microversion GET: /allocation_candidates?resources=VCPU:1,SRIOV_NET_VF:4&in_tree=$ENVIRON['CN1_UUID'] status: 400 request_headers: openstack-api-version: placement 1.30 response_strings: - "Invalid query string parameters" - name: get allocation candidates in tree with invalid uuid GET: /allocation_candidates?resources=VCPU:1,SRIOV_NET_VF:4&in_tree=life-is-beautiful status: 400 request_headers: openstack-api-version: placement 1.31 response_strings: - "Expected 'in_tree' parameter to be a format of uuid" - name: get allocation candidates in tree with root GET: /allocation_candidates?resources=VCPU:1,SRIOV_NET_VF:4&in_tree=$ENVIRON['CN1_UUID'] status: 200 request_headers: openstack-api-version: placement 1.31 response_json_paths: $.allocation_requests.`len`: 2 $.provider_summaries.`len`: 5 $.allocation_requests..allocations["$ENVIRON['CN1_UUID']"].resources.VCPU: [1, 1] $.allocation_requests..allocations["$ENVIRON['PF1_1_UUID']"].resources.SRIOV_NET_VF: 4 $.allocation_requests..allocations["$ENVIRON['PF1_2_UUID']"].resources.SRIOV_NET_VF: 4 - name: get allocation candidates in tree with child GET: /allocation_candidates?resources=VCPU:1,SRIOV_NET_VF:4&in_tree=$ENVIRON['PF1_2_UUID'] status: 200 request_headers: openstack-api-version: placement 1.31 response_json_paths: $.allocation_requests.`len`: 2 $.provider_summaries.`len`: 5 $.allocation_requests..allocations["$ENVIRON['CN1_UUID']"].resources.VCPU: [1, 1] $.allocation_requests..allocations["$ENVIRON['PF1_1_UUID']"].resources.SRIOV_NET_VF: 4 $.allocation_requests..allocations["$ENVIRON['PF1_2_UUID']"].resources.SRIOV_NET_VF: 4 - name: get allocation candidates in tree with shared 1 GET: /allocation_candidates?resources=VCPU:1,DISK_GB:10&in_tree=$ENVIRON['CN1_UUID'] status: 200 request_headers: openstack-api-version: placement 1.31 response_json_paths: # CN1 has no local disk. SS can't be used since it's out of the CN1 tree. $.allocation_requests.`len`: 0 - name: get allocation candidates in tree with shared 2 GET: /allocation_candidates?resources=VCPU:1,DISK_GB:10&in_tree=$ENVIRON['CN2_UUID'] status: 200 request_headers: openstack-api-version: placement 1.31 response_json_paths: # CN2 has local disk, but we don't get disk from the sharing providers # because they're not in_tree with CN2. $.allocation_requests.`len`: 1 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.DISK_GB: 10 - name: get allocation candidates in tree with shared 3 GET: /allocation_candidates?resources=VCPU:1,DISK_GB:10&in_tree=$ENVIRON['SS_UUID'] status: 200 request_headers: openstack-api-version: placement 1.31 response_json_paths: # SS doesn't have VCPU. $.allocation_requests.`len`: 0 # Test granular scenarios with `in_tree` - name: get allocation candidates in tree granular error orphaned GET: /allocation_candidates?resources=VCPU:1&in_tree1=$ENVIRON['CN1_UUID'] status: 400 request_headers: openstack-api-version: placement 1.31 response_strings: - "All request groups must specify resources." 
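# Note: an illustrative sketch, not part of the original test suite. A
# suffixed ``in_treeN`` is scoped to the request group with the same suffix,
# so it must be paired with a matching ``resourcesN``; an unsuffixed
# ``in_tree`` may also be combined with suffixed groups. The granular tests
# below exercise requests of roughly these shapes:
#
#   /allocation_candidates?resources1=DISK_GB:10&in_tree1=<provider uuid>
#   /allocation_candidates?resources=VCPU:1&in_tree=<root uuid>&resources1=DISK_GB:10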
- name: get allocation candidates in_tree root granular root resource GET: /allocation_candidates?resources1=VCPU:1&in_tree1=$ENVIRON['CN1_UUID'] status: 200 request_headers: openstack-api-version: placement 1.31 response_json_paths: $.allocation_requests.`len`: 1 $.allocation_requests..allocations["$ENVIRON['CN1_UUID']"].resources.VCPU: 1 - name: get allocation candidates in_tree child granular root resource GET: /allocation_candidates?resources1=VCPU:1&in_tree1=$ENVIRON['PF1_1_UUID'] status: 200 request_headers: openstack-api-version: placement 1.31 response_json_paths: $.allocation_requests.`len`: 1 $.allocation_requests..allocations["$ENVIRON['CN1_UUID']"].resources.VCPU: 1 - name: get allocation candidates in_tree root granular child resource GET: /allocation_candidates?resources1=SRIOV_NET_VF:4&in_tree1=$ENVIRON['CN1_UUID'] status: 200 request_headers: openstack-api-version: placement 1.31 response_json_paths: $.allocation_requests.`len`: 2 $.allocation_requests..allocations["$ENVIRON['PF1_1_UUID']"].resources.SRIOV_NET_VF: 4 $.allocation_requests..allocations["$ENVIRON['PF1_2_UUID']"].resources.SRIOV_NET_VF: 4 - name: get allocation candidates in_tree child granular child resource GET: /allocation_candidates?resources1=SRIOV_NET_VF:4&in_tree1=$ENVIRON['PF1_1_UUID'] status: 200 request_headers: openstack-api-version: placement 1.31 response_json_paths: $.allocation_requests.`len`: 2 $.allocation_requests..allocations["$ENVIRON['PF1_1_UUID']"].resources.SRIOV_NET_VF: 4 $.allocation_requests..allocations["$ENVIRON['PF1_2_UUID']"].resources.SRIOV_NET_VF: 4 - name: get allocation candidates in tree granular local storage nonexistent GET: /allocation_candidates?resources=VCPU:1&resources1=DISK_GB:10&in_tree1=$ENVIRON['CN1_UUID'] status: 200 request_headers: openstack-api-version: placement 1.31 response_json_paths: # CN1 has no local storage $.allocation_requests.`len`: 0 - name: get allocation candidates in tree granular local storage exists GET: /allocation_candidates?resources=VCPU:1&resources1=DISK_GB:10&in_tree1=$ENVIRON['CN2_UUID'] status: 200 request_headers: openstack-api-version: placement 1.31 response_json_paths: $.allocation_requests.`len`: 1 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.DISK_GB: 10 # Practical usage for "Give me DISK_GB from SS and VCPU from I-don't-care-where" - name: get allocation candidates in tree granular shared storage GET: /allocation_candidates?resources=VCPU:1&resources1=DISK_GB:10&in_tree1=$ENVIRON['SS_UUID'] status: 200 request_headers: openstack-api-version: placement 1.31 response_json_paths: $.allocation_requests.`len`: 2 $.allocation_requests..allocations["$ENVIRON['CN1_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['SS_UUID']"].resources.DISK_GB: [10, 10] # Practical usage for "Give me VCPU from CN1 and DISK_GB from I-don't-care-where" - name: get allocation candidates in tree unnumbered compute granular disk from shared storage only GET: /allocation_candidates?resources=VCPU:1&in_tree=$ENVIRON['CN1_UUID']&resources1=DISK_GB:10 status: 200 request_headers: openstack-api-version: placement 1.31 response_json_paths: # CN1 has no local storage $.allocation_requests.`len`: 2 $.allocation_requests..allocations["$ENVIRON['CN1_UUID']"].resources.VCPU: [1, 1] $.allocation_requests..allocations["$ENVIRON['SS_UUID']"].resources.DISK_GB: 10 
$.allocation_requests..allocations["$ENVIRON['SS2_UUID']"].resources.DISK_GB: 10 - name: get allocation candidates in tree unnumbered compute granular disk from shared or local GET: /allocation_candidates?resources=VCPU:1&in_tree=$ENVIRON['CN2_UUID']&resources1=DISK_GB:10 status: 200 request_headers: openstack-api-version: placement 1.31 response_json_paths: # CN2 has local storage $.allocation_requests.`len`: 3 $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: [1, 1, 1] $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.DISK_GB: 10 $.allocation_requests..allocations["$ENVIRON['SS_UUID']"].resources.DISK_GB: 10 $.allocation_requests..allocations["$ENVIRON['SS2_UUID']"].resources.DISK_GB: 10 # Practical usage for "Give me VCPU from CN1 and DISK_GB from SS" - name: get allocation candidates in tree granular compute and granular shared storage GET: /allocation_candidates?resources1=VCPU:1&in_tree1=$ENVIRON['CN1_UUID']&resources2=DISK_GB:10&in_tree2=$ENVIRON['SS_UUID']&group_policy=none status: 200 request_headers: openstack-api-version: placement 1.31 response_json_paths: $.allocation_requests.`len`: 1 $.allocation_requests..allocations["$ENVIRON['CN1_UUID']"].resources.VCPU: 1 $.allocation_requests..allocations["$ENVIRON['SS_UUID']"].resources.DISK_GB: 10 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocations-1-12.yaml0000664000175000017500000000706100000000000030302 0ustar00zuulzuul00000000000000fixtures: - APIFixture defaults: request_headers: x-auth-token: admin accept: application/json content-type: application/json openstack-api-version: placement 1.12 tests: - name: put an allocation listish PUT: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 data: allocations: - resource_provider: uuid: $ENVIRON['RP_UUID'] resources: DISK_GB: 10 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] status: 400 response_strings: - JSON does not validate - name: put resource provider not uuid PUT: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 data: allocations: nice_house_friend: resources: VCPU: 1 DISK_GB: 20 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 400 response_strings: - JSON does not validate - does not match any of the regexes - name: put resource class not valid PUT: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 data: allocations: $ENVIRON['RP_UUID']: resources: vcpu: 1 DISK_GB: 20 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 400 response_strings: - JSON does not validate - does not match any of the regexes - name: put empty allocations PUT: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 data: allocations: {} project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 400 response_strings: - JSON does not validate # jsonschema < 4.23.0 jsonschema >= 4.23.0 - "/(does not have enough properties)|(should be non-empty)/" - name: put unused field PUT: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 data: allocations: $ENVIRON['RP_UUID']: resources: VCPU: 1 DISK_GB: 20 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 bad_field: moo status: 400 response_strings: - JSON does not validate - name: create the resource provider POST: /resource_providers request_headers: content-type: application/json data: name: 
$ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 201 - name: set some inventory PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_provider_generation: 0 inventories: DISK_GB: total: 2048 min_unit: 10 max_unit: 1024 VCPU: total: 96 status: 200 - name: put an allocation dictish PUT: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 data: allocations: $ENVIRON['RP_UUID']: resources: VCPU: 1 DISK_GB: 20 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 204 - name: get that allocation GET: $LAST_URL - name: put that same allocation back PUT: $LAST_URL data: # there's a generation in allocations, ignored allocations: $RESPONSE['$.allocations'] # project_id and user_id not in the get response so we add it project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocations-1-8.yaml0000664000175000017500000001102100000000000030216 0ustar00zuulzuul00000000000000fixtures: - APIFixture defaults: request_headers: x-auth-token: admin accept: application/json openstack-api-version: placement 1.8 tests: - name: put an allocation no project_id or user_id PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 request_headers: content-type: application/json data: allocations: - resource_provider: uuid: $ENVIRON['RP_UUID'] resources: DISK_GB: 10 status: 400 response_strings: - Failed validating 'required' in schema - name: put an allocation no project_id PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 request_headers: content-type: application/json data: allocations: - resource_provider: uuid: $ENVIRON['RP_UUID'] resources: DISK_GB: 10 user_id: $ENVIRON['USER_ID'] status: 400 response_strings: - Failed validating 'required' in schema - name: put an allocation no user_id PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 request_headers: content-type: application/json data: allocations: - resource_provider: uuid: $ENVIRON['RP_UUID'] resources: DISK_GB: 10 project_id: $ENVIRON['PROJECT_ID'] status: 400 response_strings: - Failed validating 'required' in schema - name: put an allocation project_id less than min length PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 request_headers: content-type: application/json data: allocations: - resource_provider: uuid: $ENVIRON['RP_UUID'] resources: DISK_GB: 10 project_id: "" user_id: $ENVIRON['USER_ID'] status: 400 response_strings: - "Failed validating 'minLength'" - name: put an allocation user_id less than min length PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 request_headers: content-type: application/json data: allocations: - resource_provider: uuid: $ENVIRON['RP_UUID'] resources: DISK_GB: 10 project_id: $ENVIRON['PROJECT_ID'] user_id: "" status: 400 response_strings: - "Failed validating 'minLength'" - name: put an allocation project_id exceeds max length PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 request_headers: content-type: application/json data: allocations: - resource_provider: uuid: $ENVIRON['RP_UUID'] resources: DISK_GB: 10 project_id: 
78725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b1 user_id: $ENVIRON['USER_ID'] status: 400 response_strings: - "Failed validating 'maxLength'" - name: put an allocation user_id exceeds max length PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 request_headers: content-type: application/json data: allocations: - resource_provider: uuid: $ENVIRON['RP_UUID'] resources: DISK_GB: 10 project_id: $ENVIRON['PROJECT_ID'] user_id: 78725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b1 status: 400 response_strings: - "Failed validating 'maxLength'" - name: create the resource provider POST: /resource_providers request_headers: content-type: application/json data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 201 - name: post some inventory POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: DISK_GB total: 2048 min_unit: 10 max_unit: 1024 status: 201 - name: put an allocation PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 request_headers: content-type: application/json data: allocations: - resource_provider: uuid: $ENVIRON['RP_UUID'] resources: DISK_GB: 10 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocations-1.28.yaml0000664000175000017500000002110200000000000030302 0ustar00zuulzuul00000000000000fixtures: - AllocationFixture defaults: request_headers: x-auth-token: admin accept: application/json content-type: application/json openstack-api-version: placement 1.28 # # Scenarios to test # Start with no consumers # old, no CG = success, consumer gets created # new, no CG = fail, due to schema # new, CG=None = success, consumer gets created # new, CG= = fail # Create an allocation, and with it, a consumer # Now create another allocation # old, no CG = success # new, CG=None = fail # new, CG !match = fail # new, get CG from /allocations # new, CG matches = success tests: - name: old version get nonexistent GET: /allocations/11111111-1111-1111-1111-111111111111 request_headers: openstack-api-version: placement 1.27 response_json_paths: # This is the entire response. There is no generation or proj/user id. $: allocations: {} - name: new version get nonexistent GET: /allocations/22222222-2222-2222-2222-222222222222 response_json_paths: # This is the entire response. There is no generation or proj/user id. 
$: allocations: {} - name: old version no gen no existing PUT: /allocations/11111111-1111-1111-1111-111111111111 request_headers: openstack-api-version: placement 1.27 data: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] status: 204 - name: new version no gen no existing PUT: /allocations/22222222-2222-2222-2222-222222222222 data: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] status: 400 response_strings: - JSON does not validate - name: new version gen is not null no existing PUT: /allocations/22222222-2222-2222-2222-222222222222 data: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 5 status: 409 response_strings: - consumer generation conflict - expected null but got 5 response_json_paths: $.errors[0].code: placement.concurrent_update - name: new version gen is None no existing PUT: /allocations/22222222-2222-2222-2222-222222222222 data: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: null status: 204 - name: new version any gen no existing PUT: /allocations/33333333-3333-3333-3333-333333333333 data: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 33 status: 409 response_strings: - consumer generation conflict # Now create an allocation for a specific consumer - name: put an allocation PUT: /allocations/44444444-4444-4444-4444-444444444444 data: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: null status: 204 - name: new version no gen existing PUT: /allocations/44444444-4444-4444-4444-444444444444 data: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: null status: 409 response_strings: - consumer generation conflict - name: get the current consumer generation GET: /allocations/44444444-4444-4444-4444-444444444444 status: 200 - name: new version matching gen existing PUT: /allocations/44444444-4444-4444-4444-444444444444 data: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: $HISTORY["get the current consumer generation"].$RESPONSE["consumer_generation"] status: 204 - name: new version mismatch gen existing PUT: /allocations/44444444-4444-4444-4444-444444444444 data: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 12 status: 409 response_strings: - consumer generation conflict response_json_paths: $.errors[0].code: placement.concurrent_update - name: old version no gen existing PUT: /allocations/44444444-4444-4444-4444-444444444444 request_headers: openstack-api-version: placement 1.27 data: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] status: 204 - name: new version serialization contains consumer generation GET: /allocations/44444444-4444-4444-4444-444444444444 status: 200 response_json_paths: $.consumer_generation: /^\d+$/ - name: empty allocations dict now possible in PUT /allocations/{consumer_uuid} PUT: 
/allocations/44444444-4444-4444-4444-444444444444 data: allocations: {} project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: $HISTORY["new version serialization contains consumer generation"].$RESPONSE["consumer_generation"] status: 204 - name: old version should now return no allocations for this consumer GET: /allocations/44444444-4444-4444-4444-444444444444 request_headers: openstack-api-version: placement 1.27 status: 200 response_json_paths: # This is the entire response. There is no generation or proj/user id. $: allocations: {} - name: new version should now return no allocations for this consumer GET: /allocations/44444444-4444-4444-4444-444444444444 status: 200 response_json_paths: # This is the entire response. There is no generation or proj/user id. $: allocations: {} # The following tests cover cases where we are putting allocations to # multiple resource providers from one consumer uuid, both a brand new # consumer and an existing one. - name: create shared disk POST: /resource_providers data: name: shared_disker uuid: 8aa83304-4b6d-4a23-b954-06d8b36b206a - name: trait that disk PUT: /resource_providers/8aa83304-4b6d-4a23-b954-06d8b36b206a/traits data: resource_provider_generation: $RESPONSE['$.generation'] traits: - MISC_SHARES_VIA_AGGREGATE - STORAGE_DISK_SSD - name: set disk inventory PUT: /resource_providers/8aa83304-4b6d-4a23-b954-06d8b36b206a/inventories data: inventories: DISK_GB: total: 5000 resource_provider_generation: $RESPONSE['$.resource_provider_generation'] - name: disk in aggregate PUT: /resource_providers/8aa83304-4b6d-4a23-b954-06d8b36b206a/aggregates data: resource_provider_generation: $RESPONSE['$.resource_provider_generation'] aggregates: - 7fade9e1-ab01-4d1b-84db-ac74f740bb42 - name: compute in aggregate PUT: /resource_providers/$ENVIRON['RP_UUID']/aggregates request_headers: # avoid generation in aggregates openstack-api-version: placement 1.10 data: - 7fade9e1-ab01-4d1b-84db-ac74f740bb42 - name: get candidates with shared GET: /allocation_candidates?resources=VCPU:1,DISK_GB:200&required=STORAGE_DISK_SSD response_json_paths: $.allocation_requests.`len`: 1 $.allocation_requests[0].allocations['$ENVIRON["RP_UUID"]'].resources.VCPU: 1 $.allocation_requests[0].allocations['8aa83304-4b6d-4a23-b954-06d8b36b206a'].resources.DISK_GB: 200 - name: put that allocation to new consumer PUT: /allocations/55555555-5555-5555-5555-555555555555 data: allocations: $RESPONSE['$.allocation_requests[0].allocations'] project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: null status: 204 - name: put that allocation to existing consumer PUT: /allocations/22222222-2222-2222-2222-222222222222 data: allocations: $HISTORY['get candidates with shared'].$RESPONSE['$.allocation_requests[0].allocations'] project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] # we just happen to know this is supposed to be 1 here, so shortcutting consumer_generation: 1 status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocations-bug-1714072.yaml0000664000175000017500000000611200000000000031316 0ustar00zuulzuul00000000000000# Bug 1714072 describes a situation where a resource provider is present in the # body of an allocation, but the resources object is empty. There should be at # least one resource class and value pair. If there is not a 400 response # should be returned. 
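# (Editorial illustration, not part of the original test file.) A minimal
# sketch, assuming the >= 1.12 dict-ish request format, of the payload shape
# this file guards against: a resource provider entry whose resources object
# is empty. At least one resource class and value pair must be present,
# otherwise placement is expected to return HTTP 400.
#
#   PUT /allocations/{consumer_uuid}
#   allocations:
#     <rp_uuid>:            # hypothetical provider uuid
#       resources: {}       # empty -> 400
#   project_id: <project uuid>
#   user_id: <user uuid>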
fixtures: - APIFixture defaults: request_headers: x-auth-token: admin accept: application/json content-type: application/json # Default to <= 1.11 so the PUT /allocations in here that use the # older list-ish format continue to work. openstack-api-version: placement 1.11 tests: - name: create a resource provider POST: /resource_providers data: name: an rp status: 201 - name: get resource provider GET: $LOCATION status: 200 - name: add inventory to an rp PUT: $RESPONSE['$.links[?rel = "inventories"].href'] data: resource_provider_generation: 0 inventories: VCPU: total: 24 MEMORY_MB: total: 1024 status: 200 - name: put a successful allocation PUT: /allocations/c9f0186b-64f8-44fb-b6c9-83008d8d6940 data: allocations: - resource_provider: uuid: $HISTORY['get resource provider'].$RESPONSE['$.uuid'] resources: VCPU: 1 MEMORY_MB: 1 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 204 - name: fail with empty resources PUT: /allocations/c9f0186b-64f8-44fb-b6c9-83008d8d6940 data: allocations: - resource_provider: uuid: $HISTORY['get resource provider'].$RESPONSE['$.uuid'] resources: {} project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 400 response_strings: # jsonschema < 4.23.0 jsonschema >= 4.23.0 - "/(does not have enough properties)|(should be non-empty)/" # The next two tests confirm that the bug identified by # this file's name is not present in the PUT /allocations/{consumer_uuid} # format added by microversion 1.12. - name: put a successful dictish allocation PUT: /allocations/c9f0186b-64f8-44fb-b6c9-83008d8d6940 request_headers: openstack-api-version: placement 1.12 data: allocations: $HISTORY['get resource provider'].$RESPONSE['$.uuid']: resources: VCPU: 1 MEMORY_MB: 1 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 204 - name: fail with empty resources dictish PUT: /allocations/c9f0186b-64f8-44fb-b6c9-83008d8d6940 request_headers: openstack-api-version: placement 1.12 data: allocations: $HISTORY['get resource provider'].$RESPONSE['$.uuid']: resources: {} project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 400 response_strings: # jsonschema < 4.23.0 jsonschema >= 4.23.0 - "/(does not have enough properties)|(should be non-empty)/" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocations-bug-1778591.yaml0000664000175000017500000000424700000000000031345 0ustar00zuulzuul00000000000000# Demonstrate part of bug 1778591, where creating an allocation for # a new consumer will create the consumer and its generation, but if it # fails, the subsequent request requires generation 0, not null, which is # not what we expect. This is made more problematic in that we cannot query # the generation when the consumer has no allocations.
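# (Editorial illustration, not part of the original test file.) A rough
# sketch, assuming microversion >= 1.28, of the consumer generation protocol
# exercised below: a consumer that does not yet exist must be written with
# consumer_generation: null, while an existing consumer must send the integer
# generation last returned for it, otherwise placement responds 409.
#
#   PUT /allocations/{consumer_uuid}
#   allocations:
#     <rp_uuid>:            # hypothetical provider uuid
#       resources:
#         VCPU: 1
#   project_id: <project uuid>
#   user_id: <user uuid>
#   consumer_generation: null   # new consumer; an integer here would 409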
fixtures: - APIFixture defaults: request_headers: x-auth-token: admin # consumer generations were added in 1.28 openstack-api-version: placement 1.28 content-type: application/json accept: application/json tests: # create a simple resource provider with limited inventory - name: create provider POST: /resource_providers data: name: simple uuid: $ENVIRON['RP_UUID'] - name: set inventory PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories data: resource_provider_generation: 0 inventories: VCPU: total: 4 - name: fail allocations new consumer, bad capacity PUT: /allocations/88888888-8888-8888-8888-888888888888 data: allocations: "$ENVIRON['RP_UUID']": resources: VCPU: 9999 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: null status: 409 response_strings: - The requested amount would exceed the capacity - name: try to get consumer generation desc: when there are no allocations we can't see the generation of a consumer GET: /allocations/88888888-8888-8888-8888-888888888888 response_json_paths: # check entire response $: allocations: {} # The failure to allocate above should have deleted the auto-created consumer, # so when we retry the allocation here, we should be able to use the # appropriate null generation to indicate this is a new consumer - name: retry allocations new consumer, still null gen PUT: /allocations/88888888-8888-8888-8888-888888888888 data: allocations: "$ENVIRON['RP_UUID']": resources: VCPU: 1 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: null status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocations-bug-1778743.yaml0000664000175000017500000000414500000000000031341 0ustar00zuulzuul00000000000000# Test to see if capacity check in POST allocations works as expected. # It did not, due to bug 1778743, but it is now fixed. fixtures: - APIFixture defaults: request_headers: # 1.28 provides consumer generation in allocations openstack-api-version: placement 1.28 x-auth-token: admin content-type: application/json accept: application/json tests: - name: create an rp POST: /resource_providers data: uuid: 4e05a85b-e8a6-4b3a-82c1-5f6ad3f71d55 name: rp1 - name: add vcpu inventory PUT: /resource_providers/4e05a85b-e8a6-4b3a-82c1-5f6ad3f71d55/inventories data: resource_provider_generation: 0 inventories: VCPU: total: 2 - name: post multiple allocations desc: this should 409 because we're allocating 3 VCPU! 
POST: /allocations data: a6ace019-f230-4dcc-8a76-36d27b9c2257: allocations: 4e05a85b-e8a6-4b3a-82c1-5f6ad3f71d55: resources: VCPU: 1 project_id: a2cec092-0f67-42ed-b870-f3925cc5c6d4 user_id: d28385b2-7860-4055-b32d-4cd1057cd5f2 consumer_generation: null 2e613d4f-f5b2-4956-bd61-ea5be6600f80: allocations: 4e05a85b-e8a6-4b3a-82c1-5f6ad3f71d55: resources: VCPU: 1 project_id: a2cec092-0f67-42ed-b870-f3925cc5c6d4 user_id: d28385b2-7860-4055-b32d-4cd1057cd5f2 consumer_generation: null 2b3abca1-b72b-4817-9217-397f19b52c92: allocations: 4e05a85b-e8a6-4b3a-82c1-5f6ad3f71d55: resources: VCPU: 1 project_id: a2cec092-0f67-42ed-b870-f3925cc5c6d4 user_id: d28385b2-7860-4055-b32d-4cd1057cd5f2 consumer_generation: null status: 409 - name: check usage GET: /resource_providers/4e05a85b-e8a6-4b3a-82c1-5f6ad3f71d55/usages response_json_paths: $.usages.VCPU: 0 - name: check inventory GET: /resource_providers/4e05a85b-e8a6-4b3a-82c1-5f6ad3f71d55/inventories response_json_paths: $.inventories.VCPU.total: 2 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocations-bug-1779717.yaml0000664000175000017500000000546200000000000031346 0ustar00zuulzuul00000000000000# Test that it's possible to change the project or user identifier for a # consumer by specifying a different project_id or user_id value in the payload # of both a PUT /allocations/{consumer_uuid} or POST /allocations fixtures: - APIFixture defaults: request_headers: x-auth-token: admin accept: application/json content-type: application/json openstack-api-version: placement 1.28 tests: - name: create cn1 POST: /resource_providers data: name: cn1 status: 200 - name: add inventory PUT: $HISTORY['create cn1'].$RESPONSE['links[?rel = "inventories"].href'] data: resource_provider_generation: 0 inventories: VCPU: total: 16 MEMORY_MB: total: 2048 - name: create allocations for consumer1 PUT: /allocations/11111111-1111-1111-1111-111111111111 data: allocations: $HISTORY['create cn1'].$RESPONSE['uuid']: resources: MEMORY_MB: 1024 VCPU: 2 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: null status: 204 - name: get allocations for consumer1 GET: /allocations/11111111-1111-1111-1111-111111111111 status: 200 response_json_paths: $.project_id: $ENVIRON['PROJECT_ID'] $.user_id: $ENVIRON['USER_ID'] - name: change the project for consumer1 PUT: /allocations/11111111-1111-1111-1111-111111111111 data: allocations: $HISTORY['create cn1'].$RESPONSE['uuid']: resources: MEMORY_MB: 1024 VCPU: 2 project_id: $ENVIRON['PROJECT_ID_ALT'] user_id: $ENVIRON['USER_ID'] consumer_generation: 1 status: 204 - name: check consumer1's project is now the other project GET: /allocations/11111111-1111-1111-1111-111111111111 status: 200 response_json_paths: $.project_id: $ENVIRON['PROJECT_ID_ALT'] $.user_id: $ENVIRON['USER_ID'] - name: create allocations for two consumers POST: /allocations data: 11111111-1111-1111-1111-111111111111: allocations: $HISTORY['create cn1'].$RESPONSE['uuid']: resources: MEMORY_MB: 1024 VCPU: 1 consumer_generation: 2 # Change consumer1's project back to the original PROJECT_ID project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] 22222222-2222-2222-2222-222222222222: allocations: $HISTORY['create cn1'].$RESPONSE['uuid']: resources: MEMORY_MB: 1024 VCPU: 1 consumer_generation: null project_id: $ENVIRON['PROJECT_ID_ALT'] user_id: $ENVIRON['USER_ID_ALT'] status: 204 - name: check consumer1's project is back 
to the original project GET: /allocations/11111111-1111-1111-1111-111111111111 status: 200 response_json_paths: $.project_id: $ENVIRON['PROJECT_ID'] $.user_id: $ENVIRON['USER_ID'] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocations-legacy-rbac.yaml0000664000175000017500000002150200000000000032067 0ustar00zuulzuul00000000000000--- # Test the CRUD operations on /resource_providers/{uuid}/aggregates* using a # system administrator context. fixtures: - LegacyRBACPolicyFixture vars: - &project_id $ENVIRON['PROJECT_ID'] - &system_admin_headers x-auth-token: user x-roles: admin,member,reader accept: application/json content-type: application/json # We need 1.36 here because 1.37 required consumer_type which these # allocations do not have. openstack-api-version: placement 1.36 openstack-system-scope: all - &system_reader_headers x-auth-token: user x-roles: reader accept: application/json content-type: application/json # We need 1.36 here because 1.37 required consumer_type which these # allocations do not have. openstack-api-version: placement 1.36 openstack-system-scope: all - &project_admin_headers x-auth-token: user x-roles: admin,member,reader x-project-id: *project_id accept: application/json content-type: application/json # We need 1.36 here because 1.37 required consumer_type which these # allocations do not have. openstack-api-version: placement 1.36 - &project_member_headers x-auth-token: user x-roles: member,reader x-project-id: *project_id accept: application/json content-type: application/json # We need 1.36 here because 1.37 required consumer_type which these # allocations do not have. openstack-api-version: placement 1.36 - &project_reader_headers x-auth-token: user x-roles: reader x-project-id: *project_id accept: application/json content-type: application/json # We need 1.36 here because 1.37 required consumer_type which these # allocations do not have. 
openstack-api-version: placement 1.36 - &agg_1 f918801a-5e54-4bee-9095-09a9d0c786b8 - &agg_2 a893eb5c-e2a0-4251-ab26-f71d3b0cfc0b tests: - name: system admin can create resource provider POST: /resource_providers request_headers: *system_admin_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 - name: system admin can set inventories PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *system_admin_headers data: resource_provider_generation: 0 inventories: DISK_GB: total: 2048 min_unit: 10 max_unit: 1024 VCPU: total: 96 status: 200 - name: project admin can update allocation PUT: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *project_admin_headers data: allocations: $ENVIRON['RP_UUID']: resources: VCPU: 1 DISK_GB: 20 consumer_generation: null project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 204 - name: project admin can delete allocations DELETE: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *project_admin_headers status: 204 - name: project member cannot update allocation PUT: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *project_member_headers data: allocations: $ENVIRON['RP_UUID']: resources: VCPU: 1 DISK_GB: 20 consumer_generation: null project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 403 - name: project reader cannot update allocation PUT: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *project_reader_headers data: allocations: $ENVIRON['RP_UUID']: resources: VCPU: 1 DISK_GB: 20 consumer_generation: null project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 403 - name: system reader cannot update allocation PUT: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *system_reader_headers data: allocations: $ENVIRON['RP_UUID']: resources: VCPU: 1 DISK_GB: 20 consumer_generation: null project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 403 - name: system admin can update allocation PUT: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *system_admin_headers data: allocations: $ENVIRON['RP_UUID']: resources: VCPU: 1 DISK_GB: 20 consumer_generation: null project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 204 - name: system admin can list allocation GET: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *system_admin_headers - name: system reader cannot list allocation GET: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *system_reader_headers status: 403 - name: project admin can list allocation GET: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *project_admin_headers - name: project member cannot list allocation GET: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *project_member_headers status: 403 - name: project reader cannot list allocation GET: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *project_reader_headers status: 403 - name: system admin can list allocations for resource provider GET: /resource_providers/$ENVIRON['RP_UUID']/allocations request_headers: *system_admin_headers - name: system reader cannot list allocations for resource provider GET: /resource_providers/$ENVIRON['RP_UUID']/allocations request_headers: *system_reader_headers status: 403 - name: project 
admin can list allocations for resource provider GET: /resource_providers/$ENVIRON['RP_UUID']/allocations request_headers: *project_admin_headers - name: project member cannot list allocations for resource provider GET: /resource_providers/$ENVIRON['RP_UUID']/allocations request_headers: *project_member_headers status: 403 - name: project reader cannot list allocations for resource provider GET: /resource_providers/$ENVIRON['RP_UUID']/allocations request_headers: *project_reader_headers status: 403 - name: system reader cannot manage allocations POST: /allocations request_headers: *system_reader_headers data: a0b15655-273a-4b3d-9792-2e579b7d5ad9: consumer_generation: 1 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 allocations: $ENVIRON['RP_UUID']: resources: VCPU: 8 DISK_GB: 40 status: 403 - name: project member cannot manage allocations POST: /allocations request_headers: *project_member_headers data: a0b15655-273a-4b3d-9792-2e579b7d5ad9: consumer_generation: 1 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 allocations: $ENVIRON['RP_UUID']: resources: VCPU: 8 DISK_GB: 40 status: 403 - name: project reader cannot manage allocations POST: /allocations request_headers: *project_reader_headers data: a0b15655-273a-4b3d-9792-2e579b7d5ad9: consumer_generation: 1 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 allocations: $ENVIRON['RP_UUID']: resources: VCPU: 8 DISK_GB: 40 status: 403 - name: project admin can manage allocations POST: /allocations request_headers: *project_admin_headers data: a0b15655-273a-4b3d-9792-2e579b7d5ad9: consumer_generation: 1 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 allocations: $ENVIRON['RP_UUID']: resources: VCPU: 4 DISK_GB: 20 status: 204 - name: system admin can manage allocations POST: /allocations request_headers: *system_admin_headers data: a0b15655-273a-4b3d-9792-2e579b7d5ad9: consumer_generation: 2 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 allocations: $ENVIRON['RP_UUID']: resources: VCPU: 8 DISK_GB: 40 status: 204 - name: project member cannot delete allocations DELETE: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *project_member_headers status: 403 - name: project reader cannot delete allocations DELETE: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *project_reader_headers status: 403 - name: system reader cannot delete allocations DELETE: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *system_reader_headers status: 403 - name: system admin can delete allocations DELETE: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *system_admin_headers status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocations-mappings.yaml0000664000175000017500000001071400000000000031537 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # Tests that allocation request mappings can be sent back fixtures: # See the layout diagram in this fixture's docstring in ../fixtures.py - NUMANetworkFixture defaults: request_headers: x-auth-token: admin content-type: application/json accept: application/json # 1.34 is the microversion at which mappings are expected openstack-api-version: placement 1.34 tests: - name: mappings request GET: /allocation_candidates query_parameters: required_NET1: CUSTOM_PHYSNET1 resources_NET1: NET_BW_EGR_KILOBIT_PER_SEC:10 required_NET2: CUSTOM_PHYSNET2 resources_NET2: NET_BW_EGR_KILOBIT_PER_SEC:20 resources: VCPU:1 group_policy: isolate - name: put allocation with results PUT: /allocations/254eea13-27e1-4305-b35f-5dedd9f58ba0 data: allocations: $HISTORY['mappings request'].$RESPONSE['$.allocation_requests[0].allocations'] mappings: $HISTORY['mappings request'].$RESPONSE['$.allocation_requests[0].mappings'] consumer_generation: null user_id: 8c974f9a-f266-42f7-8613-a8017cbfb87F project_id: b2e599e0-ded8-47fd-b8ab-ceb7fca578bd status: 204 - name: put allocation wrong microversion PUT: /allocations/5662942e-497f-4a54-8257-dcbb3fa3e5f4 request_headers: openstack-api-version: placement 1.33 data: allocations: $HISTORY['mappings request'].$RESPONSE['$.allocation_requests[0].allocations'] mappings: $HISTORY['mappings request'].$RESPONSE['$.allocation_requests[0].mappings'] consumer_generation: null user_id: 8c974f9a-f266-42f7-8613-a8017cbfb87F project_id: b2e599e0-ded8-47fd-b8ab-ceb7fca578bd status: 400 response_json_paths: $.errors[0].detail: /Additional properties are not allowed/ - name: put allocation mapping bad form PUT: /allocations/5f9588de-079d-462a-a459-408524ab9b60 data: allocations: $HISTORY['mappings request'].$RESPONSE['$.allocation_requests[0].allocations'] mappings: alpha: beta consumer_generation: null user_id: 8c974f9a-f266-42f7-8613-a8017cbfb87F project_id: b2e599e0-ded8-47fd-b8ab-ceb7fca578bd status: 400 response_json_paths: $.errors[0].detail: "/JSON does not validate: 'beta' is not of type 'array'/" - name: put allocation mapping empty PUT: /allocations/5f9588de-079d-462a-a459-408524ab9b60 data: allocations: $HISTORY['mappings request'].$RESPONSE['$.allocation_requests[0].allocations'] mappings: {} consumer_generation: null user_id: 8c974f9a-f266-42f7-8613-a8017cbfb87F project_id: b2e599e0-ded8-47fd-b8ab-ceb7fca578bd status: 400 response_json_paths: # jsonschema < 4.23.0 jsonschema >= 4.23.0 $.errors[0].detail: "/JSON does not validate: {} (does not have enough properties)|(should be non-empty)/" - name: post allocation with results POST: /allocations data: '0b2c687e-89eb-47f6-bb68-2fc83e28032a': allocations: $HISTORY['mappings request'].$RESPONSE['$.allocation_requests[0].allocations'] mappings: $HISTORY['mappings request'].$RESPONSE['$.allocation_requests[0].mappings'] consumer_generation: null user_id: 8c974f9a-f266-42f7-8613-a8017cbfb87F project_id: b2e599e0-ded8-47fd-b8ab-ceb7fca578bd status: 204 - name: post allocation wrong microversion POST: /allocations request_headers: openstack-api-version: placement 1.33 data: '0b2c687e-89eb-47f6-bb68-2fc83e28032a': allocations: 
$HISTORY['mappings request'].$RESPONSE['$.allocation_requests[0].allocations'] mappings: $HISTORY['mappings request'].$RESPONSE['$.allocation_requests[0].mappings'] consumer_generation: null user_id: 8c974f9a-f266-42f7-8613-a8017cbfb87F project_id: b2e599e0-ded8-47fd-b8ab-ceb7fca578bd status: 400 response_json_paths: $.errors[0].detail: /Additional properties are not allowed/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocations-policy.yaml0000664000175000017500000000420700000000000031220 0ustar00zuulzuul00000000000000# This tests the individual CRUD operations on # /allocations* and /resource_providers/{uuid}/allocations using a non-admin # user with an open policy configuration. The response validation is # intentionally minimal. fixtures: - OpenPolicyFixture defaults: request_headers: x-auth-token: user accept: application/json content-type: application/json # We need 1.37 here because 1.38 required consumer_type which these # allocations do not have. openstack-api-version: placement 1.37 tests: - name: create resource provider POST: /resource_providers request_headers: content-type: application/json data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 - name: set some inventory PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_provider_generation: 0 inventories: DISK_GB: total: 2048 min_unit: 10 max_unit: 1024 VCPU: total: 96 status: 200 - name: create allocation for consumer PUT: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 data: allocations: $ENVIRON['RP_UUID']: resources: VCPU: 1 DISK_GB: 20 consumer_generation: null project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 204 - name: list allocations for consumer GET: $LAST_URL - name: list allocations for resource provider GET: /resource_providers/$ENVIRON['RP_UUID']/allocations - name: manage allocations POST: /allocations data: a0b15655-273a-4b3d-9792-2e579b7d5ad9: consumer_generation: 1 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 allocations: $ENVIRON['RP_UUID']: resources: VCPU: 8 DISK_GB: 40 status: 204 - name: delete allocation for consumer DELETE: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocations-post.yaml0000664000175000017500000002775700000000000030725 0ustar00zuulzuul00000000000000# Test that it possible to POST multiple allocations to /allocations to # simultaneously make changes, including removing resources for a consumer if # the allocations are empty. 
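# (Editorial illustration, not part of the original test file.) A condensed
# sketch, assuming microversion >= 1.13, of a POST /allocations body that
# changes two consumers in one request: one consumer receives new allocations
# while the other is cleared by sending an empty allocations object.
#
#   POST /allocations
#   <instance_uuid>:            # hypothetical consumer uuid
#     allocations:
#       <rp_uuid>:
#         resources:
#           VCPU: 2
#     project_id: <project uuid>
#     user_id: <user uuid>
#   <migration_uuid>:           # hypothetical consumer uuid
#     allocations: {}           # empty dict removes this consumer's usage
#     project_id: <project uuid>
#     user_id: <user uuid>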
fixtures: - APIFixture defaults: request_headers: x-auth-token: admin accept: application/json content-type: application/json openstack-api-version: placement 1.13 tests: - name: create compute one POST: /resource_providers data: name: compute01 status: 201 - name: rp compute01 desc: provide a reference for later reuse GET: $LOCATION - name: create compute two POST: /resource_providers data: name: compute02 status: 201 - name: rp compute02 desc: provide a reference for later reuse GET: $LOCATION - name: create shared disk POST: /resource_providers data: name: storage01 status: 201 - name: rp storage01 desc: provide a reference for later reuse GET: $LOCATION - name: inventory compute01 PUT: $HISTORY['rp compute01'].$RESPONSE['links[?rel = "inventories"].href'] data: resource_provider_generation: 0 inventories: VCPU: total: 16 MEMORY_MB: total: 2048 - name: inventory compute02 PUT: $HISTORY['rp compute02'].$RESPONSE['links[?rel = "inventories"].href'] data: resource_provider_generation: 0 inventories: VCPU: total: 16 MEMORY_MB: total: 2048 - name: inventory storage01 PUT: $HISTORY['rp storage01'].$RESPONSE['links[?rel = "inventories"].href'] data: resource_provider_generation: 0 inventories: DISK_GB: total: 4096 - name: confirm only POST GET: /allocations status: 405 response_headers: allow: POST - name: 404 on older 1.12 microversion post POST: /allocations request_headers: openstack-api-version: placement 1.12 status: 404 - name: post allocations two consumers POST: /allocations data: $ENVIRON['INSTANCE_UUID']: allocations: $HISTORY['rp compute02'].$RESPONSE['uuid']: resources: MEMORY_MB: 1024 VCPU: 2 $HISTORY['rp storage01'].$RESPONSE['uuid']: resources: DISK_GB: 5 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] $ENVIRON['MIGRATION_UUID']: allocations: $HISTORY['rp compute01'].$RESPONSE['uuid']: resources: MEMORY_MB: 1024 VCPU: 2 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] status: 204 - name: get allocations for instance consumer GET: /allocations/$ENVIRON['INSTANCE_UUID'] request_headers: # We want to inspect the consumer generations... openstack-api-version: placement 1.28 response_json_paths: $.allocations["$HISTORY['rp compute02'].$RESPONSE['uuid']"].resources[MEMORY_MB]: 1024 $.allocations["$HISTORY['rp compute02'].$RESPONSE['uuid']"].resources[VCPU]: 2 $.allocations["$HISTORY['rp storage01'].$RESPONSE['uuid']"].resources[DISK_GB]: 5 $.consumer_generation: 1 $.project_id: $ENVIRON['PROJECT_ID'] $.user_id: $ENVIRON['USER_ID'] - name: get allocations for migration consumer GET: /allocations/$ENVIRON['MIGRATION_UUID'] request_headers: # We want to inspect the consumer generations... 
openstack-api-version: placement 1.28 response_json_paths: $.allocations["$HISTORY['rp compute01'].$RESPONSE['uuid']"].resources[MEMORY_MB]: 1024 $.allocations["$HISTORY['rp compute01'].$RESPONSE['uuid']"].resources[VCPU]: 2 $.consumer_generation: 1 $.project_id: $ENVIRON['PROJECT_ID'] $.user_id: $ENVIRON['USER_ID'] - name: confirm usages GET: /usages?project_id=$ENVIRON['PROJECT_ID'] response_json_paths: $.usages.DISK_GB: 5 $.usages.VCPU: 4 $.usages.MEMORY_MB: 2048 - name: clear and set allocations POST: /allocations data: $ENVIRON['INSTANCE_UUID']: allocations: $HISTORY['rp compute02'].$RESPONSE['uuid']: resources: MEMORY_MB: 1024 VCPU: 2 $HISTORY['rp storage01'].$RESPONSE['uuid']: resources: DISK_GB: 5 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] $ENVIRON['MIGRATION_UUID']: allocations: {} project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] status: 204 - name: confirm usages after clear GET: /usages?project_id=$ENVIRON['PROJECT_ID'] response_json_paths: $.usages.DISK_GB: 5 $.usages.VCPU: 2 $.usages.MEMORY_MB: 1024 - name: post allocations two users POST: /allocations data: $ENVIRON['INSTANCE_UUID']: allocations: $HISTORY['rp compute02'].$RESPONSE['uuid']: resources: MEMORY_MB: 1024 VCPU: 2 $HISTORY['rp storage01'].$RESPONSE['uuid']: resources: DISK_GB: 5 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] # We must use a fresh consumer id with the alternate project id info. # A previously seen consumer id will be assumed to always have the same # project and user. $ENVIRON['CONSUMER_UUID']: allocations: $HISTORY['rp compute01'].$RESPONSE['uuid']: resources: MEMORY_MB: 1024 VCPU: 2 project_id: $ENVIRON['PROJECT_ID_ALT'] user_id: $ENVIRON['USER_ID_ALT'] status: 204 - name: confirm usages user a GET: /usages?project_id=$ENVIRON['PROJECT_ID'] response_json_paths: $.usages.`len`: 3 $.usages.DISK_GB: 5 $.usages.VCPU: 2 $.usages.MEMORY_MB: 1024 - name: confirm usages user b GET: /usages?project_id=$ENVIRON['PROJECT_ID_ALT'] response_json_paths: $.usages.`len`: 2 $.usages.VCPU: 2 $.usages.MEMORY_MB: 1024 - name: fail allocations over capacity POST: /allocations data: $ENVIRON['INSTANCE_UUID']: allocations: $HISTORY['rp compute02'].$RESPONSE['uuid']: resources: MEMORY_MB: 1024 VCPU: 2 $HISTORY['rp storage01'].$RESPONSE['uuid']: resources: DISK_GB: 5 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] $ENVIRON['CONSUMER_UUID']: allocations: $HISTORY['rp compute01'].$RESPONSE['uuid']: resources: MEMORY_MB: 2049 VCPU: 2 project_id: $ENVIRON['PROJECT_ID_ALT'] user_id: $ENVIRON['USER_ID_ALT'] status: 409 response_strings: - The requested amount would exceed the capacity - name: fail allocations deep schema violate desc: no schema yet POST: /allocations data: $ENVIRON['INSTANCE_UUID']: allocations: $HISTORY['rp compute02'].$RESPONSE['uuid']: cow: moo project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] status: 400 - name: fail allocations shallow schema violate desc: no schema yet POST: /allocations data: $ENVIRON['INSTANCE_UUID']: cow: moo status: 400 - name: fail resource provider not exist POST: /allocations data: $ENVIRON['INSTANCE_UUID']: allocations: # this rp does not exist 'c42def7b-498b-4442-9502-c7970b14bea4': resources: MEMORY_MB: 1024 VCPU: 2 $HISTORY['rp storage01'].$RESPONSE['uuid']: resources: DISK_GB: 5 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] status: 400 response_strings: - that does not exist - name: fail resource class not in inventory POST: /allocations data: $ENVIRON['INSTANCE_UUID']: 
allocations: $HISTORY['rp compute02'].$RESPONSE['uuid']: resources: MEMORY_MB: 1024 VCPU: 2 PCI_DEVICE: 1 $HISTORY['rp storage01'].$RESPONSE['uuid']: resources: DISK_GB: 5 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] status: 409 response_strings: - "Inventory for 'PCI_DEVICE' on" - name: fail resource class not exist POST: /allocations data: $ENVIRON['INSTANCE_UUID']: allocations: $HISTORY['rp compute02'].$RESPONSE['uuid']: resources: MEMORY_MB: 1024 VCPU: 2 CUSTOM_PONY: 1 $HISTORY['rp storage01'].$RESPONSE['uuid']: resources: DISK_GB: 5 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] status: 400 response_strings: - No such resource class CUSTOM_PONY - name: fail missing consumer generation >= 1.28 POST: /allocations request_headers: openstack-api-version: placement 1.28 data: $ENVIRON['INSTANCE_UUID']: allocations: $HISTORY['rp compute02'].$RESPONSE['uuid']: resources: MEMORY_MB: 1024 VCPU: 2 $HISTORY['rp storage01'].$RESPONSE['uuid']: resources: DISK_GB: 5 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] $ENVIRON['CONSUMER_UUID']: allocations: $HISTORY['rp compute01'].$RESPONSE['uuid']: resources: MEMORY_MB: 2049 VCPU: 2 project_id: $ENVIRON['PROJECT_ID_ALT'] user_id: $ENVIRON['USER_ID_ALT'] status: 400 response_strings: - JSON does not validate - name: fail incorrect consumer generation >= 1.28 POST: /allocations request_headers: openstack-api-version: placement 1.28 data: $ENVIRON['INSTANCE_UUID']: allocations: $HISTORY['rp compute02'].$RESPONSE['uuid']: resources: MEMORY_MB: 1024 VCPU: 1 $HISTORY['rp storage01'].$RESPONSE['uuid']: resources: DISK_GB: 4 consumer_generation: 1 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] $ENVIRON['CONSUMER_UUID']: allocations: $HISTORY['rp compute01'].$RESPONSE['uuid']: resources: MEMORY_MB: 1024 VCPU: 1 consumer_generation: 1 project_id: $ENVIRON['PROJECT_ID_ALT'] user_id: $ENVIRON['USER_ID_ALT'] status: 409 response_strings: - consumer generation conflict - expected 3 but got 1 - name: change allocations for existing providers >= 1.28 POST: /allocations request_headers: openstack-api-version: placement 1.28 data: $ENVIRON['INSTANCE_UUID']: allocations: $HISTORY['rp compute02'].$RESPONSE['uuid']: resources: MEMORY_MB: 1024 VCPU: 1 $HISTORY['rp storage01'].$RESPONSE['uuid']: resources: DISK_GB: 4 consumer_generation: 3 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] $ENVIRON['CONSUMER_UUID']: allocations: $HISTORY['rp compute01'].$RESPONSE['uuid']: resources: MEMORY_MB: 1024 VCPU: 1 consumer_generation: 1 project_id: $ENVIRON['PROJECT_ID_ALT'] user_id: $ENVIRON['USER_ID_ALT'] status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocations-secure-rbac.yaml0000664000175000017500000003172200000000000032116 0ustar00zuulzuul00000000000000--- # Test the CRUD operations on /resource_providers/{uuid}/aggregates* using a # system administrator context. fixtures: - SecureRBACPolicyFixture vars: - &project_id $ENVIRON['PROJECT_ID'] - &admin_project_id $ENVIRON['ADMIN_PROJECT_ID'] - &service_project_id $ENVIRON['SERVICE_PROJECT_ID'] - &admin_headers x-auth-token: user x-roles: admin x-project-id: admin_project_id accept: application/json content-type: application/json # We need 1.37 here because 1.38 required consumer_type which these # allocations do not have. 
openstack-api-version: placement 1.37 - &service_headers x-auth-token: user x-roles: service x-project-id: service_project_id accept: application/json content-type: application/json # We need 1.37 here because 1.38 required consumer_type which these # allocations do not have. openstack-api-version: placement 1.37 - &system_admin_headers x-auth-token: user x-roles: admin,member,reader accept: application/json content-type: application/json # We need 1.37 here because 1.38 required consumer_type which these # allocations do not have. openstack-api-version: placement 1.37 openstack-system-scope: all - &system_reader_headers x-auth-token: user x-roles: reader accept: application/json content-type: application/json # We need 1.37 here because 1.38 required consumer_type which these # allocations do not have. openstack-api-version: placement 1.37 openstack-system-scope: all - &project_admin_headers x-auth-token: user x-roles: admin,member,reader x-project-id: *project_id accept: application/json content-type: application/json # We need 1.37 here because 1.38 required consumer_type which these # allocations do not have. openstack-api-version: placement 1.37 - &project_member_headers x-auth-token: user x-roles: member,reader x-project-id: *project_id accept: application/json content-type: application/json # We need 1.37 here because 1.38 required consumer_type which these # allocations do not have. openstack-api-version: placement 1.37 - &project_reader_headers x-auth-token: user x-roles: reader x-project-id: *project_id accept: application/json content-type: application/json # We need 1.37 here because 1.38 required consumer_type which these # allocations do not have. openstack-api-version: placement 1.37 - &agg_1 f918801a-5e54-4bee-9095-09a9d0c786b8 - &agg_2 a893eb5c-e2a0-4251-ab26-f71d3b0cfc0b tests: - name: admin can create resource provider POST: /resource_providers request_headers: *admin_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 - name: service can create resource providers POST: /resource_providers request_headers: *service_headers data: name: $ENVIRON['RP_NAME1'] uuid: $ENVIRON['RP_UUID1'] status: 200 - name: project admin can create resource providers POST: /resource_providers request_headers: *project_admin_headers data: name: $ENVIRON['RP_NAME2'] uuid: $ENVIRON['RP_UUID2'] status: 200 - name: project admin can set inventories PUT: /resource_providers/$ENVIRON['RP_UUID2']/inventories request_headers: *project_admin_headers data: resource_provider_generation: 0 inventories: DISK_GB: total: 2048 min_unit: 10 max_unit: 1024 VCPU: total: 96 status: 200 - name: service can set inventories PUT: /resource_providers/$ENVIRON['RP_UUID1']/inventories request_headers: *service_headers data: resource_provider_generation: 0 inventories: DISK_GB: total: 2048 min_unit: 10 max_unit: 1024 VCPU: total: 96 status: 200 - name: admin can set inventories PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *admin_headers data: resource_provider_generation: 0 inventories: DISK_GB: total: 2048 min_unit: 10 max_unit: 1024 VCPU: total: 96 status: 200 - name: admin can update allocation PUT: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *admin_headers data: allocations: $ENVIRON['RP_UUID']: resources: VCPU: 1 DISK_GB: 20 consumer_generation: null project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 204 - name: service can update allocation PUT: 
/allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *service_headers data: allocations: $ENVIRON['RP_UUID']: resources: VCPU: 1 DISK_GB: 20 consumer_generation: 1 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 204 - name: project admin can update allocation PUT: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *project_admin_headers data: allocations: $ENVIRON['RP_UUID']: resources: VCPU: 1 DISK_GB: 20 consumer_generation: 2 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 204 - name: project member cannot update allocation PUT: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *project_member_headers data: allocations: $ENVIRON['RP_UUID']: resources: VCPU: 1 DISK_GB: 20 consumer_generation: null project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 403 - name: project reader cannot update allocation PUT: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *project_reader_headers data: allocations: $ENVIRON['RP_UUID']: resources: VCPU: 1 DISK_GB: 20 consumer_generation: null project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 403 - name: system reader cannot update allocation PUT: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *system_reader_headers data: allocations: $ENVIRON['RP_UUID']: resources: VCPU: 1 DISK_GB: 20 consumer_generation: null project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 403 - name: system admin cannot update allocation PUT: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *system_admin_headers data: allocations: $ENVIRON['RP_UUID']: resources: VCPU: 1 DISK_GB: 20 consumer_generation: null project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 403 - name: admin can list allocation GET: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *admin_headers - name: service can list allocation GET: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *service_headers - name: system admin cannot list allocation GET: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *system_admin_headers status: 403 - name: system reader cannot list allocation GET: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *system_reader_headers status: 403 - name: project admin cannot list allocation GET: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *project_admin_headers - name: project member cannot list allocation GET: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *project_member_headers status: 403 - name: project reader cannot list allocation GET: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *project_reader_headers status: 403 - name: admin can list allocations for resource provider GET: /resource_providers/$ENVIRON['RP_UUID']/allocations request_headers: *admin_headers - name: service can list allocations for resource provider GET: /resource_providers/$ENVIRON['RP_UUID']/allocations request_headers: *service_headers - name: system admin cannot list allocations for resource provider GET: /resource_providers/$ENVIRON['RP_UUID']/allocations request_headers: *system_admin_headers status: 403 - name: system reader cannot list allocations for resource provider GET: 
/resource_providers/$ENVIRON['RP_UUID']/allocations request_headers: *system_reader_headers status: 403 - name: project admin can list allocations for resource provider GET: /resource_providers/$ENVIRON['RP_UUID']/allocations request_headers: *project_admin_headers - name: project member cannot list allocations for resource provider GET: /resource_providers/$ENVIRON['RP_UUID']/allocations request_headers: *project_member_headers status: 403 - name: project reader cannot list allocations for resource provider GET: /resource_providers/$ENVIRON['RP_UUID']/allocations request_headers: *project_reader_headers status: 403 - name: system reader cannot manage allocations POST: /allocations request_headers: *system_reader_headers data: a0b15655-273a-4b3d-9792-2e579b7d5ad9: consumer_generation: 1 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 allocations: $ENVIRON['RP_UUID']: resources: VCPU: 8 DISK_GB: 40 status: 403 - name: project admin can manage allocations POST: /allocations request_headers: *project_admin_headers data: b0b15655-273a-4b3d-9792-2e579b7d5ad9: consumer_generation: null project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 allocations: $ENVIRON['RP_UUID2']: resources: VCPU: 8 DISK_GB: 40 status: 204 - name: project member cannot manage allocations POST: /allocations request_headers: *project_member_headers data: a0b15655-273a-4b3d-9792-2e579b7d5ad9: consumer_generation: 1 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 allocations: $ENVIRON['RP_UUID']: resources: VCPU: 8 DISK_GB: 40 status: 403 - name: project reader cannot manage allocations POST: /allocations request_headers: *project_reader_headers data: a0b15655-273a-4b3d-9792-2e579b7d5ad9: consumer_generation: 1 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 allocations: $ENVIRON['RP_UUID']: resources: VCPU: 8 DISK_GB: 40 status: 403 - name: system admin cannot manage allocations POST: /allocations request_headers: *system_admin_headers data: a0b15655-273a-4b3d-9792-2e579b7d5ad9: consumer_generation: 1 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 allocations: $ENVIRON['RP_UUID']: resources: VCPU: 8 DISK_GB: 40 status: 403 - name: admin can manage allocations POST: /allocations request_headers: *admin_headers data: a0b15655-273a-4b3d-9792-2e579b7d5ad9: consumer_generation: 3 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 allocations: $ENVIRON['RP_UUID']: resources: VCPU: 8 DISK_GB: 40 status: 204 - name: service can manage allocations POST: /allocations request_headers: *service_headers data: c0b15655-273a-4b3d-9792-2e579b7d5ad9: consumer_generation: null project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 allocations: $ENVIRON['RP_UUID1']: resources: VCPU: 8 DISK_GB: 40 status: 204 - name: project admin can delete allocations DELETE: /allocations/b0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *project_admin_headers status: 204 - name: project member cannot delete allocations DELETE: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *project_member_headers status: 403 - name: project reader cannot delete allocations DELETE: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *project_reader_headers status: 403 - name: system reader cannot delete allocations DELETE: 
/allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *system_reader_headers status: 403 - name: system admin cannot delete allocations DELETE: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *system_admin_headers status: 403 - name: admin can delete allocations DELETE: /allocations/a0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *admin_headers status: 204 - name: service can delete allocations DELETE: /allocations/c0b15655-273a-4b3d-9792-2e579b7d5ad9 request_headers: *service_headers status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/allocations.yaml0000664000175000017500000003670000000000000027726 0ustar00zuulzuul00000000000000# Tests of allocations API # # Note(cdent): Consumer ids are not validated against anything to # confirm that they are associated with anything real. This is # by design. fixtures: - APIFixture defaults: request_headers: x-auth-token: admin accept: application/json tests: - name: get allocations no consumer is 405 GET: /allocations status: 405 response_json_paths: $.errors[0].title: Method Not Allowed - name: get allocations is empty dict GET: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 response_json_paths: $.allocations: {} - name: put an allocation no resource provider PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 request_headers: content-type: application/json data: allocations: - resources: DISK_GB: 10 status: 400 response_json_paths: $.errors[0].title: Bad Request - name: create the resource provider POST: /resource_providers request_headers: content-type: application/json data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 201 - name: put an allocation no data PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 request_headers: content-type: application/json status: 400 response_json_paths: $.errors[0].title: Bad Request - name: put an allocation empty list PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 request_headers: content-type: application/json data: allocations: [] status: 400 response_strings: - "Failed validating 'minItems'" - name: put an allocation violate schema PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 request_headers: content-type: application/json data: allocations: - resource_provider: uuid: $ENVIRON['RP_UUID'] resources: cow: 10 status: 400 response_json_paths: $.errors[0].title: Bad Request - name: put an allocation no inventory PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 request_headers: content-type: application/json data: allocations: - resource_provider: uuid: $ENVIRON['RP_UUID'] resources: DISK_GB: 10 status: 409 response_json_paths: $.errors[0].title: Conflict - name: post some inventory POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: DISK_GB total: 2048 min_unit: 10 max_unit: 1024 status: 201 - name: put an allocation with zero usage PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 request_headers: content-type: application/json data: allocations: - resource_provider: uuid: $ENVIRON['RP_UUID'] resources: DISK_GB: 0 status: 400 response_strings: - "JSON does not validate: 0 is less than the minimum of 1" - Failed validating 'minimum' in schema - name: put an allocation with omitted usage PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 request_headers: content-type: application/json data: allocations: - 
resource_provider: uuid: $ENVIRON['RP_UUID'] status: 400 response_strings: - Failed validating 'required' in schema - name: put an allocation PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 request_headers: content-type: application/json data: allocations: - resource_provider: uuid: $ENVIRON['RP_UUID'] resources: DISK_GB: 10 status: 204 - name: fail to delete that provider DELETE: /resource_providers/$ENVIRON['RP_UUID'] request_headers: content-type: application/json # we need this microversion to get error codes in the response openstack-api-version: placement 1.23 status: 409 response_strings: - "Unable to delete resource provider $ENVIRON['RP_UUID']" response_json_paths: errors[0].code: placement.resource_provider.inuse - name: put an allocation different consumer PUT: /allocations/39715579-2167-4c63-8247-301311cc6703 request_headers: content-type: application/json data: allocations: - resource_provider: uuid: $ENVIRON['RP_UUID'] resources: DISK_GB: 10 status: 204 - name: check usages after another 10 GET: /resource_providers/$ENVIRON['RP_UUID']/usages response_json_paths: $.usages.DISK_GB: 20 # NOTE(cdent): Contravening the spec, we decided that it is # important to be able to update an existing allocation, so this # should work but it is important to check the usage. - name: put allocation again PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 request_headers: content-type: application/json data: allocations: - resource_provider: uuid: $ENVIRON['RP_UUID'] resources: DISK_GB: 12 status: 204 - name: check usages after 12 GET: /resource_providers/$ENVIRON['RP_UUID']/usages response_json_paths: $.usages.DISK_GB: 22 - name: put allocation bad resource class PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 request_headers: content-type: application/json data: allocations: - resource_provider: uuid: $ENVIRON['RP_UUID'] resources: COWS: 12 status: 400 response_strings: - Unable to allocate inventory for consumer - No such resource class COWS response_json_paths: $.errors[0].title: Bad Request - name: delete allocation DELETE: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 status: 204 - name: delete allocation again DELETE: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 status: 404 response_strings: - No allocations for consumer '599ffd2d-526a-4b2e-8683-f13ad25f9958' response_json_paths: $.errors[0].title: Not Found - name: delete allocation of unknown consumer id DELETE: /allocations/da78521f-bf7e-4e6e-9901-3f79bd94d55d status: 404 response_json_paths: $.errors[0].title: Not Found - name: redo an allocation PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 request_headers: content-type: application/json data: allocations: - resource_provider: uuid: $ENVIRON['RP_UUID'] resources: DISK_GB: 10 status: 204 - name: add other inventory POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: VCPU total: 32 min_unit: 1 max_unit: 8 status: 201 - name: multiple allocations PUT: /allocations/833f0885-f78c-4788-bb2b-3607b0656be7 request_headers: content-type: application/json data: allocations: - resource_provider: uuid: $ENVIRON['RP_UUID'] resources: DISK_GB: 20 VCPU: 4 status: 204 - name: check usages GET: /resource_providers/$ENVIRON['RP_UUID']/usages response_json_paths: $.resource_provider_generation: 7 $.usages.DISK_GB: 40 - name: check allocations for the resource provider GET: /resource_providers/$ENVIRON['RP_UUID']/allocations response_json_paths: $.resource_provider_generation: 7 # allocations 
are keyed by consumer id, jsonpath-rw needs us # to quote the uuids or its parser gets confused that maybe # they are numbers on which math needs to be done. $.allocations['833f0885-f78c-4788-bb2b-3607b0656be7'].resources.DISK_GB: 20 $.allocations['833f0885-f78c-4788-bb2b-3607b0656be7'].resources.VCPU: 4 $.allocations['599ffd2d-526a-4b2e-8683-f13ad25f9958'].resources.DISK_GB: 10 $.allocations['39715579-2167-4c63-8247-301311cc6703'].resources.DISK_GB: 10 - name: confirm 404 for allocations of bad resource provider GET: /resource_providers/cb8a3007-b93a-471f-9e1f-4d58355678bd/allocations status: 404 response_json_paths: $.errors[0].title: Not Found - name: check allocations by consumer id GET: /allocations/833f0885-f78c-4788-bb2b-3607b0656be7 response_json_paths: $.allocations["$ENVIRON['RP_UUID']"].generation: 7 $.allocations["$ENVIRON['RP_UUID']"].resources.DISK_GB: 20 $.allocations["$ENVIRON['RP_UUID']"].resources.VCPU: 4 - name: check allocations by different consumer id GET: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958 response_json_paths: $.allocations["$ENVIRON['RP_UUID']"].generation: 7 $.allocations["$ENVIRON['RP_UUID']"].resources.DISK_GB: 10 # create another two resource providers to test retrieving # allocations - name: create resource provider 1 POST: /resource_providers request_headers: content-type: application/json data: name: rp1 uuid: 9229b2fc-d556-4e38-9c18-443e4bc6ceae status: 201 - name: create resource provider 2 POST: /resource_providers request_headers: content-type: application/json data: name: rp2 uuid: fcfa516a-abbe-45d1-8152-d5225d82e596 status: 201 - name: set inventory on rp1 PUT: /resource_providers/9229b2fc-d556-4e38-9c18-443e4bc6ceae/inventories request_headers: content-type: application/json data: resource_provider_generation: 0 inventories: VCPU: total: 32 max_unit: 32 DISK_GB: total: 10 max_unit: 10 - name: set inventory on rp2 PUT: /resource_providers/fcfa516a-abbe-45d1-8152-d5225d82e596/inventories request_headers: content-type: application/json data: resource_provider_generation: 0 inventories: VCPU: total: 16 max_unit: 16 DISK_GB: total: 20 max_unit: 20 status: 200 - name: put allocations on both those providers one PUT: /allocations/1835b1c9-1c61-45af-9eb3-3e0e9f29487b request_headers: content-type: application/json data: allocations: - resource_provider: uuid: fcfa516a-abbe-45d1-8152-d5225d82e596 resources: DISK_GB: 10 VCPU: 8 - resource_provider: uuid: 9229b2fc-d556-4e38-9c18-443e4bc6ceae resources: DISK_GB: 5 VCPU: 16 status: 204 - name: put allocations on both those providers two PUT: /allocations/75d0f5f7-75d9-458c-b204-f90ac91604ec request_headers: content-type: application/json data: allocations: - resource_provider: uuid: fcfa516a-abbe-45d1-8152-d5225d82e596 resources: DISK_GB: 5 VCPU: 4 - resource_provider: uuid: 9229b2fc-d556-4e38-9c18-443e4bc6ceae resources: DISK_GB: 2 VCPU: 8 status: 204 # These headers should not be present in any microversion on PUT # because there is no response body. 
response_forbidden_headers: - cache-control - last-modified - name: get those allocations for consumer GET: /allocations/1835b1c9-1c61-45af-9eb3-3e0e9f29487b response_json_paths: $.allocations.['fcfa516a-abbe-45d1-8152-d5225d82e596'].generation: 3 $.allocations.['fcfa516a-abbe-45d1-8152-d5225d82e596'].resources.DISK_GB: 10 $.allocations.['fcfa516a-abbe-45d1-8152-d5225d82e596'].resources.VCPU: 8 $.allocations.['9229b2fc-d556-4e38-9c18-443e4bc6ceae'].generation: 3 $.allocations.['9229b2fc-d556-4e38-9c18-443e4bc6ceae'].resources.DISK_GB: 5 $.allocations.['9229b2fc-d556-4e38-9c18-443e4bc6ceae'].resources.VCPU: 16 - name: get those allocations for resource provider GET: /resource_providers/fcfa516a-abbe-45d1-8152-d5225d82e596/allocations response_json_paths: $.resource_provider_generation: 3 $.allocations.['75d0f5f7-75d9-458c-b204-f90ac91604ec'].resources.DISK_GB: 5 $.allocations.['75d0f5f7-75d9-458c-b204-f90ac91604ec'].resources.VCPU: 4 $.allocations.['1835b1c9-1c61-45af-9eb3-3e0e9f29487b'].resources.DISK_GB: 10 $.allocations.['1835b1c9-1c61-45af-9eb3-3e0e9f29487b'].resources.VCPU: 8 - name: put allocations on existing consumer with dashless UUID PUT: /allocations/75d0f5f775d9458cb204f90ac91604ec request_headers: content-type: application/json # Consumer generation openstack-api-version: placement 1.28 data: allocations: fcfa516a-abbe-45d1-8152-d5225d82e596: resources: DISK_GB: 1 VCPU: 1 9229b2fc-d556-4e38-9c18-443e4bc6ceae: resources: DISK_GB: 1 VCPU: 1 consumer_generation: 1 project_id: 00000000-0000-0000-0000-000000000000 user_id: 00000000-0000-0000-0000-000000000000 status: 204 - name: get allocations on existing consumer with dashed UUID GET: /allocations/75d0f5f7-75d9-458c-b204-f90ac91604ec response_json_paths: $.allocations.['fcfa516a-abbe-45d1-8152-d5225d82e596'].generation: 4 $.allocations.['fcfa516a-abbe-45d1-8152-d5225d82e596'].resources.DISK_GB: 1 $.allocations.['fcfa516a-abbe-45d1-8152-d5225d82e596'].resources.VCPU: 1 $.allocations.['9229b2fc-d556-4e38-9c18-443e4bc6ceae'].generation: 4 $.allocations.['9229b2fc-d556-4e38-9c18-443e4bc6ceae'].resources.DISK_GB: 1 $.allocations.['9229b2fc-d556-4e38-9c18-443e4bc6ceae'].resources.VCPU: 1 - name: put an allocation for a not existing resource provider PUT: /allocations/75d0f5f7-75d9-458c-b204-f90ac91604ec request_headers: content-type: application/json data: allocations: - resource_provider: uuid: be8b9cba-e7db-4a12-a386-99b4242167fe resources: DISK_GB: 5 VCPU: 4 status: 400 response_strings: - Allocation for resource provider 'be8b9cba-e7db-4a12-a386-99b4242167fe' that does not exist response_json_paths: $.errors[0].title: Bad Request - name: get allocations for resource provider with cache headers 1.15 GET: /resource_providers/fcfa516a-abbe-45d1-8152-d5225d82e596/allocations request_headers: openstack-api-version: placement 1.15 response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ - name: get allocations for resource provider without cache headers 1.14 GET: /resource_providers/fcfa516a-abbe-45d1-8152-d5225d82e596/allocations request_headers: openstack-api-version: placement 1.14 response_forbidden_headers: - cache-control - last-modified - name: get allocations for consumer with cache headers 1.15 GET: /allocations/1835b1c9-1c61-45af-9eb3-3e0e9f29487b request_headers: openstack-api-version: placement 1.15 response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? 
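      # (For instance, a value like "Mon, 02 Jan 2017 17:26:22 GMT" would
      # satisfy the pattern below; the actual timestamp varies per run.)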
last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ - name: get allocations for consumer without cache headers 1.14 GET: /allocations/1835b1c9-1c61-45af-9eb3-3e0e9f29487b request_headers: openstack-api-version: placement 1.14 response_forbidden_headers: - cache-control - last-modified - name: creating allocation with a non UUID consumer fails PUT: /allocations/not-a-uuid request_headers: content-type: application/json data: allocations: - resource_provider: uuid: fcfa516a-abbe-45d1-8152-d5225d82e596 resources: DISK_GB: 1 VCPU: 1 status: 400 response_strings: - Malformed consumer_uuid ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/basic-http.yaml0000664000175000017500000001275000000000000027453 0ustar00zuulzuul00000000000000# # Test the basic handling of HTTP (expected response codes and the # like). # fixtures: - APIFixture defaults: request_headers: # NOTE(cdent): Get past keystone, even though at this stage # we don't require auth. x-auth-token: admin accept: application/json tests: - name: 404 at no service GET: /barnabas status: 404 response_json_paths: $.errors[0].title: Not Found - name: error message has request id GET: /barnabas status: 404 response_json_paths: $.errors[0].request_id: /req-[a-fA-F0-9-]+/ - name: error message has default code 1.23 GET: /barnabas status: 404 request_headers: openstack-api-version: placement 1.23 response_json_paths: $.errors[0].code: placement.undefined_code - name: 404 at no resource provider GET: /resource_providers/fd0dd55c-6330-463b-876c-31c54e95cb95 status: 404 - name: 405 on bad method at root DELETE: / status: 405 response_headers: allow: GET response_json_paths: $.errors[0].title: Method Not Allowed - name: 200 at home GET: / status: 200 - name: 405 on bad method on app DELETE: /resource_providers status: 405 response_headers: allow: /(GET|POST), (POST|GET)/ response_json_paths: $.errors[0].title: Method Not Allowed response_strings: - The method DELETE is not allowed for this resource. - name: 405 on bad options method on app OPTIONS: /resource_providers status: 405 response_headers: allow: /(GET|POST), (POST|GET)/ response_json_paths: $.errors[0].title: Method Not Allowed response_strings: - The method OPTIONS is not allowed for this resource. - name: bad accept resource providers GET: /resource_providers request_headers: accept: text/plain status: 406 - name: complex accept resource providers GET: /resource_providers request_headers: accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 status: 200 response_json_paths: $.resource_providers: [] - name: post resource provider wrong content-type POST: /resource_providers request_headers: content-type: text/plain data: I want a resource provider please status: 415 - name: post resource provider missing content-type desc: because content-length is set, we should have a content-type POST: /resource_providers data: I want a resource provider please status: 400 response_strings: - content-type header required # NOTE(cdent): This is an awkward test. It is not actually testing a # PUT of a resource provider. It is confirming that a PUT with no # body, no content-length header and no content-type header will # reach the desired handler. 
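#
# As a rough illustration only (not something gabbi executes), the request
# exercised next is about what a client produces with a bare PUT and no
# payload, e.g.:
#
#   curl -X PUT -H 'x-auth-token: admin' \
#       http://<placement endpoint>/resource_providers/<uuid>
#
# where <placement endpoint> and <uuid> are placeholders. Such a request
# still reaches the handler and is answered with a 415 about the missing
# media type, rather than failing earlier on content-length validation.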
- name: PUT resource provider no body desc: different response string from prior test indicates past content-length requirement PUT: /resource_providers/d3a64825-8228-4ccb-8a6c-1c6d3eb6a3e8 status: 415 response_strings: - The media type None is not supported, use application/json - name: post resource provider schema mismatch POST: /resource_providers request_headers: content-type: application/json data: transport: car color: blue status: 400 - name: post good resource provider POST: /resource_providers request_headers: content-type: application/json data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 201 - name: get resource provider wrong accept GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: accept: text/plain status: 406 response_strings: - Only application/json is provided - name: get resource provider complex accept wild match desc: like a browser, */* should match GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 response_json_paths: $.uuid: $ENVIRON['RP_UUID'] - name: get resource provider complex accept no match desc: no */*, no match GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: accept: text/html,application/xhtml+xml,application/xml;q=0.9 status: 406 - name: put poor format resource provider PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: content-type: text/plain data: Why U no provide? status: 415 - name: non inventory sub resource provider path GET: /resource_providers/7850178f-1807-4512-b135-0b174985405b/cows request_headers: accept: application/json status: 404 response_json_paths: $.errors[0].title: Not Found response_strings: - The resource could not be found. - name: root at 1.15 has cache headers GET: / request_headers: openstack-api-version: placement 1.15 response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? 
last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ - name: root at 1.14 no cache headers GET: / request_headers: openstack-api-version: placement 1.14 response_forbidden_headers: - last-modified - cache-control - name: test starred accept and errors GET: /resource_providers/foo request_headers: accept: "*/*" status: 404 response_headers: content-type: application/json response_json_paths: $.errors[0].title: Not Found - name: bad content length not int POST: /resource_providers request_headers: content-type: application/json content-length: hi mom data: uuid: ce13d7f1-9988-4dfd-8e16-ce071802eb36 status: 400 response_strings: - content-length header must be an integer ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/bug-1674694.yaml0000664000175000017500000000153200000000000027030 0ustar00zuulzuul00000000000000# Test launchpad bug https://bugs.launchpad.net/nova/+bug/1674694 fixtures: - APIFixture defaults: request_headers: x-auth-token: admin tests: - name: 404 with application/json GET: /bc8d9d50-7b0d-45ef-839c-e7b5e1c4e8fd request_headers: accept: application/json status: 404 response_headers: content-type: application/json response_json_paths: $.errors[0].status: 404 - name: 404 with no accept GET: /bc8d9d50-7b0d-45ef-839c-e7b5e1c4e8fd status: 404 response_headers: content-type: application/json response_json_paths: $.errors[0].status: 404 - name: 404 with other accept GET: /bc8d9d50-7b0d-45ef-839c-e7b5e1c4e8fd status: 404 request_headers: accept: text/html response_headers: content-type: /text/html/ response_strings: - The resource could not be found ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/confirm-auth.yaml0000664000175000017500000000113100000000000030000 0ustar00zuulzuul00000000000000# # Confirm that the noauth handler is causing a 401 when no fake # token is provided. # fixtures: - APIFixture defaults: request_headers: accept: application/json tests: - name: no token gets 200 at root GET: / status: 200 - name: with token 200 at root GET: / request_headers: x-auth-token: admin:admin status: 200 - name: no token gets 401 GET: /resource_providers status: 401 - name: with token 200 GET: /resource_providers request_headers: x-auth-token: admin:admin status: 200 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/consumer-types-1.38.yaml0000664000175000017500000001672600000000000031010 0ustar00zuulzuul00000000000000# Test consumer types work as designed. 
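#
# From microversion 1.38 onward every allocation PUT/POST payload must carry
# a consumer_type string matching ^[A-Z0-9_]+$ (e.g. INSTANCE or PONY), and
# /usages results can be grouped and filtered by that type. As a minimal
# sketch of the payload shape (illustrative only; the executed requests are
# in the tests below):
#
#   allocations:
#     <resource provider uuid>:
#       resources:
#         DISK_GB: 10
#   project_id: <project uuid>
#   user_id: <user uuid>
#   consumer_generation: null
#   consumer_type: INSTANCE
#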
fixtures: - AllocationFixture defaults: request_headers: x-auth-token: admin accept: application/json content-type: application/json openstack-api-version: placement 1.38 tests: - name: 400 on no consumer type post POST: /allocations data: f5a91a0a-e111-4a9c-8a33-7b320ae1e52a: consumer_generation: null project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 status: 400 response_strings: - "'consumer_type' is a required property" - name: 400 on no consumer type put PUT: /allocations/f5a91a0a-e111-4a9c-8a33-7b320ae1e52a data: consumer_generation: null project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 status: 400 response_strings: - "'consumer_type' is a required property" - name: consumer type post POST: /allocations data: f5a91a0a-e111-4a9c-8a33-7b320ae1e52a: consumer_type: INSTANCE consumer_generation: null project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 status: 204 - name: consumer type put PUT: /allocations/f5a91a0a-e111-4a9c-8a33-7b320ae1e52a data: consumer_generation: 1 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_type: PONY allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 status: 204 - name: consumer put without type PUT: /allocations/4fa4553e-e739-4f0b-a758-2fa79fda2ee0 request_headers: openstack-api-version: placement 1.36 data: consumer_generation: null project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 status: 204 - name: reset to new type PUT: /allocations/4fa4553e-e739-4f0b-a758-2fa79fda2ee0 data: consumer_generation: 1 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_type: INSTANCE allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 status: 204 - name: malformed consumer type put PUT: /allocations/4fa4553e-e739-4f0b-a758-2fa79fda2ee0 data: consumer_generation: 1 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_type: instance allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 status: 400 response_strings: - "'instance' does not match '^[A-Z0-9_]+$'" - name: malformed consumer type post POST: /allocations data: 4fa4553e-e739-4f0b-a758-2fa79fda2ee0: consumer_generation: 1 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_type: instance allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 status: 400 response_strings: - "'instance' does not match '^[A-Z0-9_]+$'" # check usages, some allocations are pre-provided by the fixture - name: usages include consumer_type GET: /usages?project_id=$ENVIRON['PROJECT_ID'] response_json_paths: $.usages.PONY: consumer_count: 1 DISK_GB: 10 $.usages.INSTANCE: consumer_count: 1 DISK_GB: 10 $.usages.unknown: consumer_count: 3 DISK_GB: 1020 VCPU: 7 - name: limit usages by consumer_type GET: /usages?project_id=$ENVIRON['PROJECT_ID']&consumer_type=PONY response_json_paths: $.usages.`len`: 1 $.usages.PONY: consumer_count: 1 DISK_GB: 10 - name: limit usages bad consumer_type GET: /usages?project_id=$ENVIRON['PROJECT_ID']&consumer_type=COW response_json_paths: $.usages.`len`: 0 - name: limit usages by all GET: /usages?project_id=$ENVIRON['PROJECT_ID']&consumer_type=all response_json_paths: $.usages.`len`: 1 $.usages.all: consumer_count: 5 DISK_GB: 1040 VCPU: 7 - name: ALL is not all GET: /usages?project_id=$ENVIRON['PROJECT_ID']&consumer_type=ALL 
response_json_paths: $.usages.`len`: 0 - name: limit usages by unknown GET: /usages?project_id=$ENVIRON['PROJECT_ID']&consumer_type=unknown response_json_paths: $.usages.`len`: 1 $.usages.unknown: consumer_count: 3 DISK_GB: 1020 VCPU: 7 - name: UNKNOWN is not unknown GET: /usages?project_id=$ENVIRON['PROJECT_ID']&consumer_type=UNKNOWN response_json_paths: $.usages.`len`: 0 - name: reshaper accepts consumer type POST: /reshaper data: inventories: $ENVIRON['RP_UUID']: # It's 9 because of the previous work resource_provider_generation: 9 inventories: DISK_GB: total: 2048 VCPU: total: 97 allocations: 4b01cd5a-9e12-46d7-9b2a-5bc0f6040a40: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: null consumer_type: RESHAPED status: 204 - name: confirm reshaped allocations GET: /allocations/4b01cd5a-9e12-46d7-9b2a-5bc0f6040a40 response_json_paths: $.consumer_type: RESHAPED - name: reshaper requires consumer type POST: /reshaper data: inventories: $ENVIRON['RP_UUID']: # It's 9 because of the previous work resource_provider_generation: 9 inventories: DISK_GB: total: 2048 VCPU: total: 97 allocations: 4b01cd5a-9e12-46d7-9b2a-5bc0f6040a40: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 1 status: 400 response_strings: - "'consumer_type' is a required" - name: reshaper refuses consumer type earlier microversion request_headers: openstack-api-version: placement 1.36 POST: /reshaper data: inventories: $ENVIRON['RP_UUID']: # It's 9 because of the previous work resource_provider_generation: 9 inventories: DISK_GB: total: 2048 VCPU: total: 97 allocations: 4b01cd5a-9e12-46d7-9b2a-5bc0f6040a40: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 1 consumer_type: RESHAPED status: 400 response_strings: - "JSON does not validate: Additional properties are not allowed" - "'consumer_type' was unexpected" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/consumer-types-bug-story-2009167.yaml0000664000175000017500000000167300000000000033173 0ustar00zuulzuul00000000000000fixtures: - AllocationFixture defaults: request_headers: x-auth-token: admin accept: application/json content-type: application/json tests: - name: put an allocation with older than 1.38 so no consumer_type is provided PUT: /allocations/44444444-4444-4444-4444-444444444444 request_headers: openstack-api-version: placement 1.37 data: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 10 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: null status: 204 - name: get allocation with 1.38 expected "unknown" consumer_type GET: /allocations/44444444-4444-4444-4444-444444444444 request_headers: openstack-api-version: placement 1.38 response_json_paths: $.allocations.`len`: 1 $.allocations['$ENVIRON["RP_UUID"]'].resources.DISK_GB: 10 $.consumer_type: unknown status: 200 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/cors.yaml0000664000175000017500000000246700000000000026367 0ustar00zuulzuul00000000000000# Confirm that CORS is present. No complex configuration is done so # this just tests the basics. 
Borrowed, in spirit, from # nova.tests.functional.test_middleware. fixtures: - CORSFixture defaults: request_headers: x-auth-token: user tests: - name: valid options request OPTIONS: / request_headers: origin: http://valid.example.com access-control-request-method: GET access-control-request-headers: openstack-api-version status: 200 response_headers: access-control-allow-origin: http://valid.example.com # Confirm allow-headers configuration. access-control-allow-headers: openstack-api-version - name: invalid options request OPTIONS: / request_headers: origin: http://invalid.example.com access-control-request-method: GET status: 200 response_forbidden_headers: - access-control-allow-origin - name: valid get request GET: / request_headers: origin: http://valid.example.com access-control-request-method: GET status: 200 response_headers: access-control-allow-origin: http://valid.example.com - name: invalid get request GET: / request_headers: origin: http://invalid.example.com access-control-request-method: GET status: 200 response_forbidden_headers: - access-control-allow-origin ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/ensure-consumer.yaml0000664000175000017500000000255700000000000030553 0ustar00zuulzuul00000000000000# Tests of the ensure consumer behaviour for versions of the API before 1.8; # starting with 1.8, project_id and user_id are required by the # PUT: /allocations/{consumer_uuid} API. fixtures: - AllocationFixture defaults: request_headers: x-auth-token: admin accept: application/json openstack-api-version: placement 1.7 vars: - &default_incomplete_id 00000000-0000-0000-0000-000000000000 tests: - name: put an allocation without project/user (1.7) PUT: /allocations/$ENVIRON['CONSUMER_UUID'] request_headers: content-type: application/json openstack-api-version: placement 1.7 data: allocations: - resource_provider: uuid: $ENVIRON['RP_UUID'] resources: DISK_GB: 10 status: 204 # We now ALWAYS create a consumer record, and if project or user isn't # specified (as was the case in 1.7) we should get the project/user # corresponding to the CONF option for incomplete consumers when asking for the # allocation information at a microversion that shows project/user information # (1.12+) - name: get with 1.12 microversion and check project and user are filled GET: /allocations/$ENVIRON['CONSUMER_UUID'] request_headers: openstack-api-version: placement 1.12 response_json_paths: $.project_id: *default_incomplete_id $.user_id: *default_incomplete_id ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/granular-same-subtree.yaml0000664000175000017500000005632000000000000031623 0ustar00zuulzuul00000000000000# Tests of /allocation_candidates API with same_subtree. 
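#
# same_subtree (microversion 1.36+) takes a comma-separated list of request
# group suffixes; at least one of the providers satisfying those groups must
# be an ancestor of (or the same provider as) the rest, so the whole set
# lands in one provider subtree. A minimal sketch of such a query,
# illustrative only (the fixture-backed requests follow in the tests below):
#
#   GET /allocation_candidates?resources_COMPUTE=VCPU:1
#       &resources_ACCEL=CUSTOM_FPGA:1
#       &same_subtree=_COMPUTE,_ACCEL
#       &group_policy=none
#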
fixtures: - NUMANetworkFixture defaults: request_headers: x-auth-token: admin content-type: application/json accept: application/json # version of request in which `same_subtree` is supported openstack-api-version: placement 1.36 tests: - name: resourceless traits without same_subtree GET: /allocation_candidates query_parameters: resources1: VCPU:1 required2: COMPUTE_VOLUME_MULTI_ATTACH group_policy: none status: 400 response_strings: - "Resourceless suffixed group request should be specified in `same_subtree` query param" response_json_paths: $.errors[0].title: Bad Request $.errors[0].code: placement.query.bad_value - name: resourceless aggs without same_subtree GET: /allocation_candidates query_parameters: resources1: VCPU:1 member_of2: $ENVIRON['AGGA_UUID'] group_policy: none status: 400 response_strings: - "Resourceless suffixed group request should be specified in `same_subtree` query param" response_json_paths: $.errors[0].title: Bad Request $.errors[0].code: placement.query.bad_value - name: resourceless without any resource GET: /allocation_candidates?&member_of1=$ENVIRON['AGGA_UUID']&group_policy=none query_parameters: member_of1: $ENVIRON['AGGA_UUID'] group_policy: none status: 400 response_strings: - 'There must be at least one resources or resources[$S] parameter.' response_json_paths: $.errors[0].title: Bad Request $.errors[0].code: placement.query.missing_value - name: invalid same subtree missing underscores GET: /allocation_candidates query_parameters: resources_COMPUTE: VCPU:1 resources_ACCEL: CUSTOM_FPGA:1 same_subtree: COMPUTE,_ACCEL group_policy: none status: 400 response_strings: - "Real suffixes should be specified in `same_subtree`:" response_json_paths: $.errors[0].title: Bad Request $.errors[0].code: placement.query.bad_value - name: invalid same subtree with empty suffix GET: /allocation_candidates query_parameters: resources_COMPUTE: VCPU:1 resources_ACCEL: CUSTOM_FPGA:1 same_subtree: _COMPUTE,,_ACCEL group_policy: none status: 400 response_strings: - 'Empty string (unsuffixed group) can not be specified in `same_subtree`' response_json_paths: $.errors[0].title: Bad Request $.errors[0].code: placement.query.bad_value - name: no resourceless without same subtree GET: /allocation_candidates query_parameters: resources_COMPUTE: VCPU:1 resources_ACCEL: CUSTOM_FPGA:1 group_policy: none response_json_paths: $.allocation_requests.`len`: 6 $.allocation_requests..allocations['$ENVIRON["NUMA0_UUID"]'].resources.VCPU: [1, 1, 1] $.allocation_requests..allocations['$ENVIRON["NUMA1_UUID"]'].resources.VCPU: [1, 1, 1] $.allocation_requests..allocations['$ENVIRON["FPGA0_UUID"]'].resources.CUSTOM_FPGA: [1, 1] $.allocation_requests..allocations['$ENVIRON["FPGA1_0_UUID"]'].resources.CUSTOM_FPGA: [1, 1] $.allocation_requests..allocations['$ENVIRON["FPGA1_1_UUID"]'].resources.CUSTOM_FPGA: [1, 1] - name: no resourceless with single same subtree GET: /allocation_candidates query_parameters: resources_COMPUTE: VCPU:1 resources_ACCEL: CUSTOM_FPGA:1 same_subtree: _COMPUTE group_policy: none response_json_paths: $.allocation_requests.`len`: 6 $.allocation_requests..allocations['$ENVIRON["NUMA0_UUID"]'].resources.VCPU: [1, 1, 1] $.allocation_requests..allocations['$ENVIRON["NUMA1_UUID"]'].resources.VCPU: [1, 1, 1] $.allocation_requests..allocations['$ENVIRON["FPGA0_UUID"]'].resources.CUSTOM_FPGA: [1, 1] $.allocation_requests..allocations['$ENVIRON["FPGA1_0_UUID"]'].resources.CUSTOM_FPGA: [1, 1] $.allocation_requests..allocations['$ENVIRON["FPGA1_1_UUID"]'].resources.CUSTOM_FPGA: [1, 1] - 
name: no resourceless with same subtree GET: /allocation_candidates query_parameters: resources_COMPUTE: VCPU:1 resources_ACCEL: CUSTOM_FPGA:1 same_subtree: _COMPUTE,_ACCEL group_policy: none response_json_paths: $.allocation_requests.`len`: 3 $.allocation_requests..allocations['$ENVIRON["NUMA0_UUID"]'].resources.VCPU: 1 $.allocation_requests..allocations['$ENVIRON["NUMA1_UUID"]'].resources.VCPU: [1, 1] $.allocation_requests..allocations['$ENVIRON["FPGA0_UUID"]'].resources.CUSTOM_FPGA: 1 $.allocation_requests..allocations['$ENVIRON["FPGA1_0_UUID"]'].resources.CUSTOM_FPGA: 1 $.allocation_requests..allocations['$ENVIRON["FPGA1_1_UUID"]'].resources.CUSTOM_FPGA: 1 - name: no resourceless with same subtree same provider # Ensure that "myself" is in the same subtree GET: /allocation_candidates query_parameters: resources_COMPUTE1: VCPU:1 resources_COMPUTE2: MEMORY_MB:1024 same_subtree: _COMPUTE1,_COMPUTE2 group_policy: none response_json_paths: $.allocation_requests.`len`: 3 $.allocation_requests..allocations['$ENVIRON["NUMA0_UUID"]'].resources.VCPU: 1 $.allocation_requests..allocations['$ENVIRON["NUMA0_UUID"]'].resources.MEMORY_MB: 1024 $.allocation_requests..allocations['$ENVIRON["NUMA1_UUID"]'].resources.VCPU: 1 $.allocation_requests..allocations['$ENVIRON["NUMA1_UUID"]'].resources.MEMORY_MB: 1024 $.allocation_requests..allocations['$ENVIRON["CN2_UUID"]'].resources.VCPU: 1 $.allocation_requests..allocations['$ENVIRON["CN2_UUID"]'].resources.MEMORY_MB: 1024 - name: no resourceless with same subtree same provider isolate GET: /allocation_candidates query_parameters: resources_COMPUTE1: VCPU:1 resources_COMPUTE2: MEMORY_MB:1024 same_subtree: _COMPUTE1,_COMPUTE2 group_policy: isolate response_json_paths: $.allocation_requests.`len`: 0 - name: resourceful without same subtree GET: /allocation_candidates query_parameters: resources: VCPU:1 resources_PORT1: CUSTOM_VF:4 required_PORT1: CUSTOM_PHYSNET1 resources_PORT2: CUSTOM_VF:4 required_PORT2: CUSTOM_PHYSNET2 group_policy: none response_json_paths: $.allocation_requests.`len`: 2 $.allocation_requests..allocations['$ENVIRON["CN2_UUID"]'].resources.VCPU: [1, 1] $.allocation_requests..allocations['$ENVIRON["PF1_1_UUID"]'].resources.CUSTOM_VF: 4 $.allocation_requests..allocations['$ENVIRON["PF1_2_UUID"]'].resources.CUSTOM_VF: [4, 4] $.allocation_requests..allocations['$ENVIRON["PF3_1_UUID"]'].resources.CUSTOM_VF: 4 - name: resourceless with same subtree 4VFs GET: /allocation_candidates query_parameters: resources: VCPU:1 required_NIC: CUSTOM_HW_NIC_ROOT resources_PORT1: CUSTOM_VF:4 required_PORT1: CUSTOM_PHYSNET1 resources_PORT2: CUSTOM_VF:4 required_PORT2: CUSTOM_PHYSNET2 same_subtree: _NIC,_PORT1,_PORT2 group_policy: none response_json_paths: $.allocation_requests.`len`: 1 $.allocation_requests..allocations.`len`: 3 $.allocation_requests..allocations['$ENVIRON["CN2_UUID"]'].resources.VCPU: 1 $.allocation_requests..allocations['$ENVIRON["PF1_1_UUID"]'].resources.CUSTOM_VF: 4 $.allocation_requests..allocations['$ENVIRON["PF1_2_UUID"]'].resources.CUSTOM_VF: 4 $.allocation_requests..mappings.`len`: 4 $.allocation_requests..mappings[''][0]: $ENVIRON["CN2_UUID"] $.allocation_requests..mappings['_NIC'][0]: $ENVIRON["NIC1_UUID"] $.allocation_requests..mappings['_PORT1'][0]: $ENVIRON["PF1_1_UUID"] $.allocation_requests..mappings['_PORT2'][0]: $ENVIRON["PF1_2_UUID"] - name: resourceless with same subtree 2VFs GET: /allocation_candidates query_parameters: resources: VCPU:1 required_NIC: CUSTOM_HW_NIC_ROOT resources_PORT1: CUSTOM_VF:2 required_PORT1: 
CUSTOM_PHYSNET1 resources_PORT2: CUSTOM_VF:2 required_PORT2: CUSTOM_PHYSNET2 same_subtree: _NIC,_PORT1,_PORT2 group_policy: none response_json_paths: $.allocation_requests.`len`: 5 $.allocation_requests..allocations['$ENVIRON["CN2_UUID"]'].resources.VCPU: [1, 1, 1, 1, 1] $.allocation_requests..allocations['$ENVIRON["PF1_1_UUID"]'].resources.CUSTOM_VF: 2 $.allocation_requests..allocations['$ENVIRON["PF1_2_UUID"]'].resources.CUSTOM_VF: 2 $.allocation_requests..allocations['$ENVIRON["PF2_1_UUID"]'].resources.CUSTOM_VF: [2, 2] $.allocation_requests..allocations['$ENVIRON["PF2_2_UUID"]'].resources.CUSTOM_VF: [2, 2] $.allocation_requests..allocations['$ENVIRON["PF2_3_UUID"]'].resources.CUSTOM_VF: [2, 2] $.allocation_requests..allocations['$ENVIRON["PF2_4_UUID"]'].resources.CUSTOM_VF: [2, 2] - name: resourceless with same subtree 2VFs isolate GET: /allocation_candidates query_parameters: resources: VCPU:1 required_NIC: CUSTOM_HW_NIC_ROOT resources_PORT1: CUSTOM_VF:2 required_PORT1: CUSTOM_PHYSNET1 resources_PORT2: CUSTOM_VF:2 required_PORT2: CUSTOM_PHYSNET2 same_subtree: _NIC,_PORT1,_PORT2 group_policy: isolate response_json_paths: $.allocation_requests.`len`: 5 $.allocation_requests..allocations['$ENVIRON["CN2_UUID"]'].resources.VCPU: [1, 1, 1, 1, 1] $.allocation_requests..allocations['$ENVIRON["PF1_1_UUID"]'].resources.CUSTOM_VF: 2 $.allocation_requests..allocations['$ENVIRON["PF1_2_UUID"]'].resources.CUSTOM_VF: 2 $.allocation_requests..allocations['$ENVIRON["PF2_1_UUID"]'].resources.CUSTOM_VF: [2, 2] $.allocation_requests..allocations['$ENVIRON["PF2_2_UUID"]'].resources.CUSTOM_VF: [2, 2] $.allocation_requests..allocations['$ENVIRON["PF2_3_UUID"]'].resources.CUSTOM_VF: [2, 2] $.allocation_requests..allocations['$ENVIRON["PF2_4_UUID"]'].resources.CUSTOM_VF: [2, 2] - name: resourceless with same subtree 2+1+1 VFs GET: /allocation_candidates query_parameters: resources: VCPU:1 required_NIC: CUSTOM_HW_NIC_ROOT resources_PORT1: CUSTOM_VF:2 required_PORT1: CUSTOM_PHYSNET1 resources_PORT2A: CUSTOM_VF:1 required_PORT2A: CUSTOM_PHYSNET2 resources_PORT2B: CUSTOM_VF:1 required_PORT2B: CUSTOM_PHYSNET2 same_subtree: _NIC,_PORT1,_PORT2A,_PORT2B group_policy: none response_json_paths: $.allocation_requests.`len`: 9 $.allocation_requests..allocations['$ENVIRON["CN2_UUID"]'].resources.VCPU: [1, 1, 1, 1, 1, 1, 1, 1, 1] $.allocation_requests..allocations['$ENVIRON["PF1_1_UUID"]'].resources.CUSTOM_VF: 2 $.allocation_requests..allocations['$ENVIRON["PF1_2_UUID"]'].resources.CUSTOM_VF: 2 # The four extra candidates still have both PHYSNET1 VFs from the same provider... $.allocation_requests..allocations['$ENVIRON["PF2_1_UUID"]'].resources.CUSTOM_VF: [2, 2, 2, 2] $.allocation_requests..allocations['$ENVIRON["PF2_3_UUID"]'].resources.CUSTOM_VF: [2, 2, 2, 2] # ...but one PHYSNET2 VF from each of PF2_2 and PF2_4 # NOTE(efried): This would be more readable as... # $.allocation_requests..allocations['$ENVIRON["PF2_2_UUID"]'].resources.CUSTOM_VF.`sorted`: [1, 1, 1, 1, 2, 2] # $.allocation_requests..allocations['$ENVIRON["PF2_4_UUID"]'].resources.CUSTOM_VF.`sorted`: [1, 1, 1, 1, 2, 2] # ...but jsonpath pukes with "TypeError: 'DatumInContext' object is not iterable" # And this `len` also blows up: # $.allocation_requests..allocations['$ENVIRON["PF2_2_UUID"]'].resources.CUSTOM_VF.`len`: 6 # $.allocation_requests..allocations['$ENVIRON["PF2_4_UUID"]'].resources.CUSTOM_VF.`len`: 6 # So instead, we use a filter to find all the allocation requests with # one VF -- there should be four of them... 
$.allocation_requests[?(allocations.'$ENVIRON["PF2_2_UUID"]'.resources.CUSTOM_VF<=1)]..allocations['$ENVIRON["PF2_2_UUID"]'].resources.CUSTOM_VF: [1, 1, 1, 1] $.allocation_requests[?(allocations.'$ENVIRON["PF2_4_UUID"]'.resources.CUSTOM_VF<=1)]..allocations['$ENVIRON["PF2_4_UUID"]'].resources.CUSTOM_VF: [1, 1, 1, 1] # ...and similarly to find all the allocation requests with two VFs -- # there should be two of them: $.allocation_requests[?(allocations.'$ENVIRON["PF2_2_UUID"]'.resources.CUSTOM_VF>1)]..allocations['$ENVIRON["PF2_2_UUID"]'].resources.CUSTOM_VF: [2, 2] $.allocation_requests[?(allocations.'$ENVIRON["PF2_4_UUID"]'.resources.CUSTOM_VF>1)]..allocations['$ENVIRON["PF2_4_UUID"]'].resources.CUSTOM_VF: [2, 2] - name: resourceless with same subtree 2+1+1 VFs isolate GET: /allocation_candidates query_parameters: resources: VCPU:1 required_NIC: CUSTOM_HW_NIC_ROOT resources_PORT1: CUSTOM_VF:2 required_PORT1: CUSTOM_PHYSNET1 resources_PORT2A: CUSTOM_VF:1 required_PORT2A: CUSTOM_PHYSNET2 resources_PORT2B: CUSTOM_VF:1 required_PORT2B: CUSTOM_PHYSNET2 same_subtree: _NIC,_PORT1,_PORT2A,_PORT2B group_policy: isolate response_json_paths: # Delta from above - by isolating, we lose: # - the candidate under nic1 because we can't isolate VFs on NET2 there. # - the four candidates under nic2 involving both PHYSNET2 VFs coming # from the same provider. $.allocation_requests.`len`: 4 $.allocation_requests..allocations['$ENVIRON["CN2_UUID"]'].resources.VCPU: [1, 1, 1, 1] $.allocation_requests..allocations['$ENVIRON["PF2_1_UUID"]'].resources.CUSTOM_VF: [2, 2] $.allocation_requests..allocations['$ENVIRON["PF2_3_UUID"]'].resources.CUSTOM_VF: [2, 2] $.allocation_requests..allocations['$ENVIRON["PF2_2_UUID"]'].resources.CUSTOM_VF: [1, 1, 1, 1] $.allocation_requests..allocations['$ENVIRON["PF2_4_UUID"]'].resources.CUSTOM_VF: [1, 1, 1, 1] - name: resourceless with same subtree same provider GET: /allocation_candidates query_parameters: resources_PORT1: CUSTOM_VF:8 required_PORT2: CUSTOM_PHYSNET1 same_subtree: _PORT1,_PORT2 group_policy: none response_json_paths: $.allocation_requests.`len`: 1 $.allocation_requests..allocations.`len`: 1 $.allocation_requests..allocations['$ENVIRON["PF3_1_UUID"]'].resources.CUSTOM_VF: 8 $.allocation_requests..mappings.`len`: 2 $.allocation_requests..mappings['_PORT1'][0]: $ENVIRON["PF3_1_UUID"] $.allocation_requests..mappings['_PORT2'][0]: $ENVIRON["PF3_1_UUID"] - name: resourceless with same subtree same provider isolate GET: /allocation_candidates query_parameters: resources_PORT1: CUSTOM_VF:8 required_PORT2: CUSTOM_PHYSNET1 same_subtree: _PORT1,_PORT2 group_policy: isolate response_json_paths: $.allocation_requests.`len`: 0 - name: multiple resourceless with same subtree same provider GET: /allocation_candidates query_parameters: resources_COMPUTE1: VCPU:1 required_COMPUTE2: CUSTOM_FOO required_COMPUTE3: HW_NUMA_ROOT same_subtree: _COMPUTE1,_COMPUTE2,_COMPUTE3 group_policy: none response_json_paths: $.allocation_requests.`len`: 1 $.allocation_requests..allocations.`len`: 1 $.allocation_requests..allocations['$ENVIRON["NUMA1_UUID"]'].resources.VCPU: 1 $.allocation_requests..mappings.`len`: 3 $.allocation_requests..mappings['_COMPUTE1'][0]: $ENVIRON["NUMA1_UUID"] $.allocation_requests..mappings['_COMPUTE2'][0]: $ENVIRON["NUMA1_UUID"] $.allocation_requests..mappings['_COMPUTE3'][0]: $ENVIRON["NUMA1_UUID"] - name: multiple resourceless with same subtree same provider isolate GET: /allocation_candidates query_parameters: resources_COMPUTE1: VCPU:1 required_COMPUTE2: CUSTOM_FOO 
required_COMPUTE3: HW_NUMA_ROOT same_subtree: _COMPUTE1,_COMPUTE2,_COMPUTE3 group_policy: isolate response_json_paths: $.allocation_requests.`len`: 0 - name: resourceless with same subtree 2FPGAs GET: /allocation_candidates query_parameters: required_NUMA: HW_NUMA_ROOT resources_ACCEL1: CUSTOM_FPGA:1 resources_ACCEL2: CUSTOM_FPGA:1 same_subtree: _NUMA,_ACCEL1,_ACCEL2 group_policy: isolate response_json_paths: $.allocation_requests.`len`: 2 $.allocation_requests..allocations['$ENVIRON["FPGA1_0_UUID"]'].resources.CUSTOM_FPGA: [1, 1] $.allocation_requests..allocations['$ENVIRON["FPGA1_1_UUID"]'].resources.CUSTOM_FPGA: [1, 1] $.allocation_requests..mappings.`len`: [3, 3] $.allocation_requests..mappings['_NUMA'][0]: /(?:$ENVIRON['NUMA1_UUID']|$ENVIRON['NUMA1_UUID'])/ $.allocation_requests..mappings['_ACCEL1'][0]: /(?:$ENVIRON['FPGA1_0_UUID']|$ENVIRON['FPGA1_1_UUID'])/ $.allocation_requests..mappings['_ACCEL2'][0]: /(?:$ENVIRON['FPGA1_0_UUID']|$ENVIRON['FPGA1_1_UUID'])/ - name: duplicate suffixes are squashed GET: /allocation_candidates query_parameters: required_NUMA: HW_NUMA_ROOT resources_ACCEL1: CUSTOM_FPGA:1 resources_ACCEL2: CUSTOM_FPGA:1 # This test is identical to the above except for duplicated suffixes here same_subtree: _NUMA,_ACCEL1,_ACCEL2,_NUMA,_ACCEL1 group_policy: isolate response_json_paths: $.allocation_requests.`len`: 2 $.allocation_requests..allocations['$ENVIRON["FPGA1_0_UUID"]'].resources.CUSTOM_FPGA: [1, 1] $.allocation_requests..allocations['$ENVIRON["FPGA1_1_UUID"]'].resources.CUSTOM_FPGA: [1, 1] $.allocation_requests..mappings.`len`: [3, 3] $.allocation_requests..mappings['_NUMA'][0]: /(?:$ENVIRON['NUMA1_UUID']|$ENVIRON['NUMA1_UUID'])/ $.allocation_requests..mappings['_ACCEL1'][0]: /(?:$ENVIRON['FPGA1_0_UUID']|$ENVIRON['FPGA1_1_UUID'])/ $.allocation_requests..mappings['_ACCEL2'][0]: /(?:$ENVIRON['FPGA1_0_UUID']|$ENVIRON['FPGA1_1_UUID'])/ - name: resourceless with same subtree 2FPGAs forbidden GET: /allocation_candidates query_parameters: required_NUMA: HW_NUMA_ROOT,!CUSTOM_FOO resources_ACCEL1: CUSTOM_FPGA:1 resources_ACCEL2: CUSTOM_FPGA:1 same_subtree: _NUMA,_ACCEL1,_ACCEL2 group_policy: isolate response_json_paths: $.allocation_requests.`len`: 0 - name: multiple same_subtree qparams GET: /allocation_candidates query_parameters: required_NUMA: HW_NUMA_ROOT resources_COMPUTE: VCPU:2,MEMORY_MB:512 resources_FPGA: CUSTOM_FPGA:1 resources_GPU: VGPU:1 required_SRIOV: CUSTOM_VNIC_TYPE_DIRECT resources_NET1: NET_BW_EGR_KILOBIT_PER_SEC:100 required_NET1: CUSTOM_PHYSNET1 resources_NET2: NET_BW_EGR_KILOBIT_PER_SEC:100 required_NET2: CUSTOM_PHYSNET2 same_subtree: # Compute and accel resources from the same NUMA node - _NUMA,_COMPUTE,_GPU,_FPGA # Bandwidth resources under the same agent - _SRIOV,_NET1,_NET2 group_policy: none response_json_paths: # There's only one way this shakes out $.allocation_requests.`len`: 1 $.allocation_requests[0].allocations['$ENVIRON['NUMA0_UUID']']: resources: VCPU: 2 MEMORY_MB: 512 $.allocation_requests[0].allocations['$ENVIRON['FPGA0_UUID']']: resources: CUSTOM_FPGA: 1 $.allocation_requests[0].allocations['$ENVIRON['PGPU0_UUID']']: resources: VGPU: 1 $.allocation_requests[0].allocations['$ENVIRON['ESN1_UUID']']: resources: NET_BW_EGR_KILOBIT_PER_SEC: 100 $.allocation_requests[0].allocations['$ENVIRON['ESN2_UUID']']: resources: NET_BW_EGR_KILOBIT_PER_SEC: 100 $.allocation_requests[0].mappings: _NUMA: ["$ENVIRON['NUMA0_UUID']"] _COMPUTE: ["$ENVIRON['NUMA0_UUID']"] _FPGA: ["$ENVIRON['FPGA0_UUID']"] _GPU: ["$ENVIRON['PGPU0_UUID']"] _SRIOV: 
["$ENVIRON['SRIOV_AGENT_UUID']"] _NET1: ["$ENVIRON['ESN1_UUID']"] _NET2: ["$ENVIRON['ESN2_UUID']"] # The next two tests are isolated to cn2 (only cn2 has HW_NIC_ROOT and VFs) and # demonstrate the difference between same_subtree=A,B&same_subtree=B,C and # same_subtree=A,B,C. - name: overlapping same_subtreeZ GET: /allocation_candidates query_parameters: resources_COMPUTE: VCPU:1 required_NIC: CUSTOM_HW_NIC_ROOT resources_PORT1: CUSTOM_VF:2 required_PORT1: CUSTOM_PHYSNET1 # In this test we use distinct but overlapping same_subtreeZ. same_subtree: # This ties each NIC to cn2, which would have happened anyway - _NIC,_COMPUTE # This ties each PF to its parent NIC - _NIC,_PORT1 group_policy: none response_json_paths: $.provider_summaries.`len`: 11 $.allocation_requests.`len`: 4 $.allocation_requests..mappings._COMPUTE: # 4 cn2_uuid each as a list, no other computes - - $ENVIRON['CN2_UUID'] - - $ENVIRON['CN2_UUID'] - - $ENVIRON['CN2_UUID'] - - $ENVIRON['CN2_UUID'] $.allocation_requests..allocations['$ENVIRON["CN2_UUID"]'].resources.VCPU: [1, 1, 1, 1] $.allocation_requests..allocations['$ENVIRON["PF1_1_UUID"]'].resources.CUSTOM_VF: 2 $.allocation_requests..allocations['$ENVIRON["PF3_1_UUID"]'].resources.CUSTOM_VF: 2 $.allocation_requests..allocations['$ENVIRON["PF2_1_UUID"]'].resources.CUSTOM_VF: 2 $.allocation_requests..allocations['$ENVIRON["PF2_3_UUID"]'].resources.CUSTOM_VF: 2 - name: combined same_subtree GET: /allocation_candidates query_parameters: resources_COMPUTE: VCPU:1 required_NIC: CUSTOM_HW_NIC_ROOT resources_PORT1: CUSTOM_VF:2 required_PORT1: CUSTOM_PHYSNET1 # In this test we use a single same_subtree that is the union of the two # in the test above. This allows permutations where one NIC satisfies # CUSTOM_HW_NIC_ROOT, but a PF under a *different* NIC satisfies the VFs. # This is because _COMPUTE acts as the common ancestor, since it is part # of the same same_subtree. 
same_subtree: - _NIC,_COMPUTE,_PORT1 group_policy: none response_json_paths: $.provider_summaries.`len`: 11 $.allocation_requests.`len`: 12 $.allocation_requests..mappings._COMPUTE: # 12 cn2_uuid each as a list, no other computes - - $ENVIRON['CN2_UUID'] - - $ENVIRON['CN2_UUID'] - - $ENVIRON['CN2_UUID'] - - $ENVIRON['CN2_UUID'] - - $ENVIRON['CN2_UUID'] - - $ENVIRON['CN2_UUID'] - - $ENVIRON['CN2_UUID'] - - $ENVIRON['CN2_UUID'] - - $ENVIRON['CN2_UUID'] - - $ENVIRON['CN2_UUID'] - - $ENVIRON['CN2_UUID'] - - $ENVIRON['CN2_UUID'] $.allocation_requests..allocations['$ENVIRON["CN2_UUID"]'].resources.VCPU: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] $.allocation_requests..allocations['$ENVIRON["PF1_1_UUID"]'].resources.CUSTOM_VF: [2, 2, 2] $.allocation_requests..allocations['$ENVIRON["PF3_1_UUID"]'].resources.CUSTOM_VF: [2, 2, 2] $.allocation_requests..allocations['$ENVIRON["PF2_1_UUID"]'].resources.CUSTOM_VF: [2, 2, 2] $.allocation_requests..allocations['$ENVIRON["PF2_3_UUID"]'].resources.CUSTOM_VF: [2, 2, 2] - name: same_subtree with an ancestry hole GET: /allocation_candidates query_parameters: required_MULTI_ATTACH: COMPUTE_VOLUME_MULTI_ATTACH resources_BW: NET_BW_EGR_KILOBIT_PER_SEC:100 resources_COMPUTE: VCPU:4 same_subtree: _MULTI_ATTACH,_BW,_COMPUTE group_policy: isolate response_json_paths: $.allocation_requests.`len`: 3 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/granular.yaml0000664000175000017500000005446200000000000027236 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
# Tests for granular resource requests fixtures: # See the layout diagram in this fixture's docstring in ../fixtures.py - GranularFixture defaults: request_headers: x-auth-token: admin content-type: application/json accept: application/json openstack-api-version: placement 1.25 tests: - name: different groups hit with group_policy=none GET: /allocation_candidates query_parameters: resources1: VCPU:1 resources2: MEMORY_MB:1024 group_policy: none status: 200 response_json_paths: $.allocation_requests.`len`: 3 $.provider_summaries.`len`: 3 $.allocation_requests..allocations["$ENVIRON['CN_LEFT']"].resources: VCPU: 1 MEMORY_MB: 1024 $.allocation_requests..allocations["$ENVIRON['CN_MIDDLE']"].resources: VCPU: 1 MEMORY_MB: 1024 $.allocation_requests..allocations["$ENVIRON['CN_RIGHT']"].resources: VCPU: 1 MEMORY_MB: 1024 $.provider_summaries["$ENVIRON['CN_LEFT']"].resources: VCPU: capacity: 8 used: 0 MEMORY_MB: capacity: 4096 used: 0 $.provider_summaries["$ENVIRON['CN_MIDDLE']"].resources: VCPU: capacity: 8 used: 0 MEMORY_MB: capacity: 4096 used: 0 $.provider_summaries["$ENVIRON['CN_RIGHT']"].resources: VCPU: capacity: 8 used: 0 MEMORY_MB: capacity: 4096 used: 0 - name: different groups miss with group_policy=isolate GET: /allocation_candidates query_parameters: resources1: VCPU:1 resources2: MEMORY_MB:1024 group_policy: isolate status: 200 response_json_paths: # We asked for VCPU and MEMORY_MB to be satisfied by *different* # providers, because they're in separate numbered request groups and # group_policy=isolate. Since there are no sharing providers of these # resources, we get no results. $.allocation_requests.`len`: 0 $.provider_summaries.`len`: 0 - name: multiple group_policy picks the first one # NOTE(efried): gabbi query_parameters doesn't preserve param order GET: /allocation_candidates?resources1=VCPU:1&resources2=MEMORY_MB:1024&group_policy=isolate&group_policy=none status: 200 response_json_paths: $.allocation_requests.`len`: 0 $.provider_summaries.`len`: 0 - name: resources combine GET: /allocation_candidates query_parameters: resources: VCPU:3,MEMORY_MB:512 resources1: VCPU:1,MEMORY_MB:1024 resources2: VCPU:2 group_policy: none status: 200 response_json_paths: $.allocation_requests.`len`: 3 $.provider_summaries.`len`: 3 $.allocation_requests..allocations["$ENVIRON['CN_LEFT']"].resources: VCPU: 6 MEMORY_MB: 1536 $.allocation_requests..allocations["$ENVIRON['CN_MIDDLE']"].resources: VCPU: 6 MEMORY_MB: 1536 $.allocation_requests..allocations["$ENVIRON['CN_RIGHT']"].resources: VCPU: 6 MEMORY_MB: 1536 - name: group policy not required with only one numbered group GET: /allocation_candidates?resources=VCPU:1&resources1=MEMORY_MB:2048 status: 200 response_json_paths: $.allocation_requests.`len`: 3 $.provider_summaries.`len`: 3 - name: disk sharing isolated GET: /allocation_candidates query_parameters: resources1: VCPU:1,MEMORY_MB:1024 resources2: DISK_GB:100 group_policy: isolate status: 200 response_json_paths: # Here we've asked for VCPU and MEMORY_MB to be satisfied by the same # provider - all three of our non-sharing providers can do that - and # the DISK_GB to be satisfied by a *different* provider than the VCPU and # MEMORY_MB. So we'll get all permutations where cn_* provide VCPU and # MEMORY_MB and shr_disk_* provide the DISK_GB; but *no* results where # DISK_GB is provided by the cn_*s themselves. 
$.allocation_requests.`len`: 5 $.provider_summaries.`len`: 5 - name: disk sharing non-isolated GET: /allocation_candidates query_parameters: resources1: VCPU:1,MEMORY_MB:1024 resources2: DISK_GB:100 group_policy: none status: 200 response_json_paths: $.allocation_requests.`len`: 7 $.provider_summaries.`len`: 5 - name: disk alone GET: /allocation_candidates query_parameters: resources1: DISK_GB:800 status: 200 response_json_paths: $.allocation_requests.`len`: 2 $.provider_summaries.`len`: 2 $.allocation_requests..allocations["$ENVIRON['SHR_DISK_1']"].resources[DISK_GB]: 800 $.allocation_requests..allocations["$ENVIRON['SHR_DISK_2']"].resources[DISK_GB]: 800 - name: disk alone non-granular GET: /allocation_candidates query_parameters: resources: DISK_GB:800 status: 200 response_json_paths: $.allocation_requests.`len`: 2 $.provider_summaries.`len`: 2 $.allocation_requests..allocations["$ENVIRON['SHR_DISK_1']"].resources[DISK_GB]: 800 $.allocation_requests..allocations["$ENVIRON['SHR_DISK_2']"].resources[DISK_GB]: 800 - name: isolated ssd GET: /allocation_candidates query_parameters: resources1: VCPU:1,MEMORY_MB:1024 resources2: DISK_GB:100 required2: CUSTOM_DISK_SSD group_policy: isolate status: 200 response_json_paths: # We get candidates [cn_left + shr_disk_1] and [cn_middle + shr_disk_1] # We don't get [cn_right + shr_disk_1] because they're not associated via aggregate. # We don't get [cn_left/middle + shr_disk_2] because shr_disk_2 doesn't have the SSD trait # We don't get [cn_left] or [cn_right] even though they have SSD disk because we asked to isolate $.allocation_requests.`len`: 2 $.allocation_requests..allocations["$ENVIRON['CN_LEFT']"].resources: VCPU: 1 MEMORY_MB: 1024 $.allocation_requests..allocations["$ENVIRON['CN_MIDDLE']"].resources: VCPU: 1 MEMORY_MB: 1024 # shr_disk_1 satisfies the disk for both allocation requests $.allocation_requests..allocations["$ENVIRON['SHR_DISK_1']"].resources[DISK_GB]: [100, 100] $.provider_summaries.`len`: 3 $.provider_summaries["$ENVIRON['CN_LEFT']"].resources: VCPU: capacity: 8 used: 0 MEMORY_MB: capacity: 4096 used: 0 DISK_GB: capacity: 500 used: 0 $.provider_summaries["$ENVIRON['CN_MIDDLE']"].resources: VCPU: capacity: 8 used: 0 MEMORY_MB: capacity: 4096 used: 0 $.provider_summaries["$ENVIRON['SHR_DISK_1']"].resources: DISK_GB: capacity: 1000 used: 0 - name: no isolation, forbid ssd GET: /allocation_candidates query_parameters: resources1: VCPU:1 resources2: DISK_GB:100 required2: "!CUSTOM_DISK_SSD" group_policy: none status: 200 response_json_paths: # The permutations we *don't* get are: # cn_right by itself because it has SSD # - anything involving shr_disk_1 because it has SSD $.allocation_requests.`len`: 4 # We get two allocation requests involving cn_left - one where it # satisfies the disk itself and one where shr_disk_2 provides it $.allocation_requests..allocations["$ENVIRON['CN_LEFT']"].resources[VCPU]: [1, 1] # We get one for [cn_middle + shr_disk_2] - it doesn't have disk to provide for itself $.allocation_requests..allocations["$ENVIRON['CN_MIDDLE']"].resources[VCPU]: 1 # We get one for [cn_right + shr_disk_2] - cn_right can't provide its own # disk due to the forbidden SSD trait. 
$.allocation_requests..allocations["$ENVIRON['CN_RIGHT']"].resources[VCPU]: 1 # shr_disk_2 satisfies the disk for three out of the four allocation # requests (all except the one where cn_left provides for itself) $.allocation_requests..allocations["$ENVIRON['SHR_DISK_2']"].resources[DISK_GB]: [100, 100, 100] # Validate that we got the correct four providers in the summaries $.provider_summaries.`len`: 4 $.provider_summaries["$ENVIRON['CN_LEFT']"].resources[VCPU][capacity]: 8 $.provider_summaries["$ENVIRON['CN_MIDDLE']"].resources[VCPU][capacity]: 8 $.provider_summaries["$ENVIRON['CN_RIGHT']"].resources[VCPU][capacity]: 8 $.provider_summaries["$ENVIRON['SHR_DISK_2']"].resources[DISK_GB][capacity]: 1000 - name: member_of filters GET: /allocation_candidates query_parameters: resources1: VCPU:1 resources2: DISK_GB:100 member_of2: $ENVIRON['AGGC'] group_policy: none status: 200 response_json_paths: $.allocation_requests.`len`: 1 $.allocation_requests[0].allocations["$ENVIRON['CN_RIGHT']"].resources: VCPU: 1 DISK_GB: 100 $.provider_summaries.`len`: 1 $.provider_summaries["$ENVIRON['CN_RIGHT']"].resources[VCPU][capacity]: 8 $.provider_summaries["$ENVIRON['CN_RIGHT']"].resources[DISK_GB][capacity]: 500 - name: required, forbidden, member_of in GET: /allocation_candidates query_parameters: resources1: VCPU:1 required1: "!HW_CPU_X86_SSE" resources2: DISK_GB:100 required2: CUSTOM_DISK_SSD member_of2: in:$ENVIRON['AGGA'],$ENVIRON['AGGC'] group_policy: none status: 200 response_json_paths: # cn_middle won't appear (forbidden SSE trait) # shr_disk_2 won't appear (required SSD trait is absent) # [cn_left] won't be in the results (required SSD trait is absent) # So we'll get: # [cn_left, shr_disk_1] # [cn_right] $.allocation_requests.`len`: 2 $.allocation_requests..allocations["$ENVIRON['CN_LEFT']"].resources[VCPU]: 1 $.allocation_requests..allocations["$ENVIRON['CN_RIGHT']"].resources[VCPU]: 1 $.allocation_requests..allocations["$ENVIRON['SHR_DISK_1']"].resources[DISK_GB]: 100 $.provider_summaries.`len`: 3 $.provider_summaries["$ENVIRON['CN_LEFT']"].resources[VCPU][capacity]: 8 $.provider_summaries["$ENVIRON['CN_RIGHT']"].resources[VCPU][capacity]: 8 $.provider_summaries["$ENVIRON['CN_RIGHT']"].resources[DISK_GB][capacity]: 500 $.provider_summaries["$ENVIRON['SHR_DISK_1']"].resources[DISK_GB][capacity]: 1000 - name: required, forbidden, member_of in long suffix desc: same as above, but using complex suffixes GET: /allocation_candidates query_parameters: resources_compute: VCPU:1 required_compute: "!HW_CPU_X86_SSE" resources_disk: DISK_GB:100 required_disk: CUSTOM_DISK_SSD member_of_disk: in:$ENVIRON['AGGA'],$ENVIRON['AGGC'] group_policy: none request_headers: openstack-api-version: placement 1.33 status: 200 response_json_paths: $.allocation_requests.`len`: 2 $.allocation_requests..allocations["$ENVIRON['CN_LEFT']"].resources[VCPU]: 1 $.allocation_requests..allocations["$ENVIRON['CN_RIGHT']"].resources[VCPU]: 1 $.allocation_requests..allocations["$ENVIRON['SHR_DISK_1']"].resources[DISK_GB]: 100 $.provider_summaries.`len`: 3 $.provider_summaries["$ENVIRON['CN_LEFT']"].resources[VCPU][capacity]: 8 $.provider_summaries["$ENVIRON['CN_RIGHT']"].resources[VCPU][capacity]: 8 $.provider_summaries["$ENVIRON['CN_RIGHT']"].resources[DISK_GB][capacity]: 500 $.provider_summaries["$ENVIRON['SHR_DISK_1']"].resources[DISK_GB][capacity]: 1000 - name: multiple member_of GET: /allocation_candidates query_parameters: resources1: VCPU:1 resources2: DISK_GB:100 member_of2: - in:$ENVIRON['AGGB'],$ENVIRON['AGGC'] - 
$ENVIRON['AGGA'] group_policy: isolate status: 200 response_json_paths: # The member_of2 specifications say that the DISK_GB resource must come # from a provider that's in aggA and also in (aggB and/or aggC). Only # shr_disk_2 qualifies; so we'll get results anchored at cn_middle and # cn_right. But note that we'll also get a result anchored at cn_left: # it doesn't meet the member_of criteria, but it doesn't need to, since # it's not providing the DISK_GB resource. $.allocation_requests.`len`: 3 $.allocation_requests..allocations["$ENVIRON['CN_LEFT']"].resources[VCPU]: 1 $.allocation_requests..allocations["$ENVIRON['CN_MIDDLE']"].resources[VCPU]: 1 $.allocation_requests..allocations["$ENVIRON['CN_RIGHT']"].resources[VCPU]: 1 $.allocation_requests..allocations["$ENVIRON['SHR_DISK_2']"].resources[DISK_GB]: [100, 100, 100] $.provider_summaries.`len`: 4 $.provider_summaries["$ENVIRON['CN_LEFT']"].resources[VCPU][capacity]: 8 $.provider_summaries["$ENVIRON['CN_MIDDLE']"].resources[VCPU][capacity]: 8 $.provider_summaries["$ENVIRON['CN_RIGHT']"].resources[VCPU][capacity]: 8 $.provider_summaries["$ENVIRON['SHR_DISK_2']"].resources[DISK_GB][capacity]: 1000 - name: multiple disks, multiple networks GET: /allocation_candidates query_parameters: resources1: VCPU:1 resources2: VGPU:1 required2: HW_GPU_API_DXVA resources3: MEMORY_MB:1024 resources4: DISK_GB:100 required4: CUSTOM_DISK_SSD resources5: DISK_GB:50 required5: "!CUSTOM_DISK_SSD" resources6: SRIOV_NET_VF:1,CUSTOM_NET_MBPS:1000 resources7: SRIOV_NET_VF:2,CUSTOM_NET_MBPS:2000 group_policy: none # Breaking it down: # => These could come from cn_left, cn_middle, or cn_right # ?resources1=VCPU:1 # &resources3=MEMORY_MB:1024 # => But this limits us to cn_left and cn_right # &resources2=VGPU:1&required2=HW_GPU_API_DXVA # => Since we're not isolating, this SSD can come from cn_right or shr_disk_1 # &resources4=DISK_GB:100&required4=CUSTOM_DISK_SSD # => This non-SSD can come from cn_left or shr_disk_2 # &resources5=DISK_GB:50&required5=!CUSTOM_DISK_SSD # => These VFs and bandwidth can come from cn_left or shr_net. Since cn_left # can't be an anchor for shr_net, these will always combine. # &resources6=SRIOV_NET_VF:1,CUSTOM_NET_MBPS:1000 # &resources7=SRIOV_NET_VF:2,CUSTOM_NET_MBPS:2000 # => If we didn't do this, the separated VCPU/MEMORY_MB/VGPU resources would # cause us to get no results # &group_policy=none status: 200 response_json_paths: # We have two permutations involving cn_left. # - One where the non-SSD is satisfied by cn_left itself # [cn_left(VCPU:1, MEMORY_MB:1024, VGPU:1, DISK_GB:50, SRIOV_NET_VF:3, CUSTOM_NET_MBPS:3000), # shr_disk_1(DISK_GB:100)] # - And one where the non-SSD is satisfied by shr_disk_2 # [cn_left(VCPU:1, MEMORY_MB:1024, VGPU:1, SRIOV_NET_VF:3, CUSTOM_NET_MBPS:3000), # shr_disk_1(DISK_GB:100), # shr_disk_2(DISK_GB: 50)] # There's only one result involving cn_right. 
# - We must satisfy the SSD from cn_right and the non-SSD from shr_disk_2 # - We must satisfy the network stuff from shr_net # [cn_right(VCPU:1, MEMORY_MB:1024, VGPU:1, DISK_GB:100), # shr_disk_2(DISK_GB:50), # shr_net(SRIOV_NET_VF:3, CUSTOM_NET_MBPS:3000)] $.allocation_requests.`len`: 3 $.allocation_requests..allocations["$ENVIRON['CN_LEFT']"].resources[VCPU]: [1, 1] $.allocation_requests..allocations["$ENVIRON['CN_LEFT']"].resources[MEMORY_MB]: [1024, 1024] $.allocation_requests..allocations["$ENVIRON['CN_LEFT']"].resources[VGPU]: [1, 1] $.allocation_requests..allocations["$ENVIRON['CN_LEFT']"].resources[SRIOV_NET_VF]: [3, 3] $.allocation_requests..allocations["$ENVIRON['CN_LEFT']"].resources[CUSTOM_NET_MBPS]: [3000, 3000] $.allocation_requests..allocations["$ENVIRON['CN_LEFT']"].resources[DISK_GB]: 50 # These come from the cn_left results $.allocation_requests..allocations["$ENVIRON['SHR_DISK_1']"].resources[DISK_GB]: [100, 100] # One of these comes from the second cn_left result, the other from the cn_right result $.allocation_requests..allocations["$ENVIRON['SHR_DISK_2']"].resources[DISK_GB]: [50, 50] $.allocation_requests..allocations["$ENVIRON['CN_RIGHT']"].resources[VCPU]: 1 $.allocation_requests..allocations["$ENVIRON['CN_RIGHT']"].resources[MEMORY_MB]: 1024 $.allocation_requests..allocations["$ENVIRON['CN_RIGHT']"].resources[VGPU]: 1 $.allocation_requests..allocations["$ENVIRON['CN_RIGHT']"].resources[DISK_GB]: 100 $.allocation_requests..allocations["$ENVIRON['SHR_NET']"].resources[SRIOV_NET_VF]: 3 $.allocation_requests..allocations["$ENVIRON['SHR_NET']"].resources[CUSTOM_NET_MBPS]: 3000 # Just make sure we got the correct five providers in the summaries $.provider_summaries.`len`: 5 $.provider_summaries["$ENVIRON['CN_LEFT']"].resources[VCPU][capacity]: 8 $.provider_summaries["$ENVIRON['CN_RIGHT']"].resources[VCPU][capacity]: 8 $.provider_summaries["$ENVIRON['SHR_DISK_1']"].resources[DISK_GB][capacity]: 1000 $.provider_summaries["$ENVIRON['SHR_DISK_2']"].resources[DISK_GB][capacity]: 1000 $.provider_summaries["$ENVIRON['SHR_NET']"].resources[SRIOV_NET_VF][capacity]: 16 - name: combining request groups exceeds capacity GET: /allocation_candidates query_parameters: resources: VCPU:2,MEMORY_MB:2048,SRIOV_NET_VF:1,CUSTOM_NET_MBPS:2000 resources1: SRIOV_NET_VF:1,CUSTOM_NET_MBPS:3000 status: 200 response_json_paths: # CUSTOM_NET_MBPS of 2000 + 3000 = 5000 is too much for cn_left, but # shr_net can accommodate it. $.allocation_requests.`len`: 1 $.allocation_requests..allocations["$ENVIRON['CN_RIGHT']"].resources[VCPU]: 2 $.allocation_requests..allocations["$ENVIRON['CN_RIGHT']"].resources[MEMORY_MB]: 2048 $.allocation_requests..allocations["$ENVIRON['SHR_NET']"].resources[SRIOV_NET_VF]: 2 $.allocation_requests..allocations["$ENVIRON['SHR_NET']"].resources[CUSTOM_NET_MBPS]: 5000 $.provider_summaries.`len`: 2 $.provider_summaries["$ENVIRON['CN_RIGHT']"].resources[VCPU][capacity]: 8 $.provider_summaries["$ENVIRON['SHR_NET']"].resources[CUSTOM_NET_MBPS][capacity]: 40000 - name: combining request groups exceeds max_unit GET: /allocation_candidates query_parameters: resources: VGPU:1 resources1: VGPU:1 resources2: VGPU:1 group_policy: none status: 200 response_json_paths: # VGPU of 1 + 1 + 1 = 3 exceeds max_unit on cn_right, but cn_left can handle it. 
$.allocation_requests.`len`: 1 $.allocation_requests..allocations["$ENVIRON['CN_LEFT']"].resources[VGPU]: 3 $.provider_summaries.`len`: 1 $.provider_summaries["$ENVIRON['CN_LEFT']"].resources[VGPU][capacity]: 8 ################# # Error scenarios ################# - name: numbered resources bad microversion GET: /allocation_candidates?resources=MEMORY_MB:1024&resources1=VCPU:1 request_headers: openstack-api-version: placement 1.24 status: 400 response_strings: - Invalid query string parameters - "'resources1' was unexpected" - name: numbered traits bad microversion GET: /allocation_candidates?resources=MEMORY_MB:1024&required2=HW_CPU_X86_AVX2 request_headers: openstack-api-version: placement 1.24 status: 400 response_strings: - Invalid query string parameters - "'required2' was unexpected" - name: numbered member_of bad microversion GET: /allocation_candidates?resources=MEMORY_MB:1024&member_of3=$ENVIRON['AGGB'] request_headers: openstack-api-version: placement 1.24 status: 400 response_strings: - Invalid query string parameters - "'member_of3' was unexpected" - name: group_policy bad microversion GET: /allocation_candidates?resources=VCPU:1&group_policy=isolate request_headers: openstack-api-version: placement 1.24 status: 400 response_strings: - Invalid query string parameters - "'group_policy' was unexpected" - name: bogus numbering GET: /allocation_candidates?resources01=VCPU:1 status: 400 response_strings: - Invalid query string parameters - "'resources01' does not match any of the regexes" - name: bogus suffix desc: this is bogus because of unsupported character GET: /allocation_candidates?resources1@=VCPU:1 request_headers: openstack-api-version: placement 1.33 status: 400 response_strings: - Invalid query string parameters - "'resources1@' does not match any of the regexes" - "^member_of([a-zA-Z0-9_-]{1,64})?$" - name: bogus length desc: 65 character suffix is too long GET: /allocation_candidates?resources_0123456701234567012345670123456701234567012345670123456701234567=VCPU:1 request_headers: openstack-api-version: placement 1.33 status: 400 response_strings: - Invalid query string parameters - "'resources_0123456701234567012345670123456701234567012345670123456701234567' does not match any of the regexes" - "^member_of([a-zA-Z0-9_-]{1,64})?$" - name: invalid group_policy value GET: /allocation_candidates?resources=VCPU:1&group_policy=bogus status: 400 response_strings: - Invalid query string parameters - "'bogus' is not one of ['none', 'isolate']" - name: group_policy required when more than one numbered group GET: /allocation_candidates?resources1=VCPU:1&resources2=VCPU:1 status: 400 response_strings: - The \"group_policy\" parameter is required when specifying more than one \"resources{N}\" parameter. - name: orphaned traits keys GET: /allocation_candidates?required=FOO&required1=BAR status: 400 response_strings: - 'Found the following orphaned traits keys: required, required1' - name: orphaned member_of keys GET: /allocation_candidates?member_of=$ENVIRON['AGGA']&member_of3=$ENVIRON['AGGC'] status: 400 response_strings: - 'Found the following orphaned member_of keys: member_of, member_of3' - name: at least one request group required GET: /allocation_candidates?group_policy=isolate status: 400 response_strings: - At least one request group (`resources` or `resources{$S}`) is required. 
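# NOTE: Illustrative sketch only (not one of the tests above): the
# "group_policy is required" error goes away once a policy is supplied for a
# request with more than one numbered group, e.g.
#
#   GET /allocation_candidates?resources1=VCPU:1&resources2=VCPU:1&group_policy=none
#
# where group_policy=none allows both groups to be satisfied by the same
# provider and group_policy=isolate forces them onto different providers, as
# exercised by the earlier tests in this file.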
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/inventory-legacy-rbac.yaml0000664000175000017500000002767000000000000031630 0ustar00zuulzuul00000000000000--- fixtures: - LegacyRBACPolicyFixture vars: - &project_id $ENVIRON['PROJECT_ID'] - &system_admin_headers x-auth-token: user x-roles: admin,member,reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &system_reader_headers x-auth-token: user x-roles: reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &project_admin_headers x-auth-token: user x-roles: admin,member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_member_headers x-auth-token: user x-roles: member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_reader_headers x-auth-token: user x-roles: reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest tests: # create resource provider - name: system admin can create resource providers POST: /resource_providers request_headers: *system_admin_headers data: name: fc65b9c3-2d41-44b1-96ca-1d1a13b4dd69 uuid: 85475179-de26-4f7a-8c11-b4dc10fe47f4 status: 200 - name: system reader cannot create resource providers POST: /resource_providers request_headers: *system_reader_headers data: name: de40da45-e029-450d-b147-178136518e4d uuid: 7d7e6957-45b0-4791-b79a-69a88327ab0d status: 403 - name: project admin can create resource providers POST: /resource_providers request_headers: *project_admin_headers data: name: f4720d4c-3a29-4676-aeb1-faa39084051e uuid: 0e4fdc4e-5790-477a-9e4f-4f6898537ad9 status: 200 - name: project member cannot create resource providers POST: /resource_providers request_headers: *project_member_headers data: name: cf4511a9-a4f8-402c-ae03-233eb97e2358 uuid: 6bb64c0f-4704-4337-8bae-18bbc6131a32 status: 403 - name: project reader cannot create resource providers POST: /resource_providers request_headers: *project_reader_headers data: name: 53519f75-dcd3-45dc-b355-8c0e2628a8e8 uuid: 29742738-d409-4e2e-b4bc-b941ee9268fa status: 403 # list inventory - name: system admin can list inventories GET: /resource_providers/85475179-de26-4f7a-8c11-b4dc10fe47f4/inventories request_headers: *system_admin_headers response_json_paths: $.resource_provider_generation: 0 $.inventories: {} - name: system reader cannot list inventories GET: /resource_providers/85475179-de26-4f7a-8c11-b4dc10fe47f4/inventories request_headers: *system_reader_headers status: 403 - name: project admin can list inventories GET: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories request_headers: *project_admin_headers response_json_paths: $.resource_provider_generation: 0 $.inventories: {} - name: project member cannot list inventories GET: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories request_headers: *project_member_headers status: 403 - name: project reader cannot list inventories GET: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories request_headers: *project_reader_headers status: 403 # create inventory - name: system admin can create an inventory POST: 
/resource_providers/85475179-de26-4f7a-8c11-b4dc10fe47f4/inventories request_headers: *system_admin_headers data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 201 response_headers: location: $SCHEME://$NETLOC/resource_providers/85475179-de26-4f7a-8c11-b4dc10fe47f4/inventories/DISK_GB - name: system reader cannot create an inventory POST: /resource_providers/85475179-de26-4f7a-8c11-b4dc10fe47f4/inventories request_headers: *system_reader_headers data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 403 - name: project admin can create an inventory POST: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories request_headers: *project_admin_headers data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 201 response_headers: location: $SCHEME://$NETLOC/resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories/DISK_GB - name: project member cannot create an inventory POST: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories request_headers: *project_member_headers data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 403 - name: project reader cannot create an inventory POST: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories request_headers: *project_reader_headers data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 403 # show inventory - name: system admin can show inventory GET: /resource_providers/85475179-de26-4f7a-8c11-b4dc10fe47f4/inventories/DISK_GB request_headers: *system_admin_headers status: 200 - name: system reader cannot show inventory GET: /resource_providers/85475179-de26-4f7a-8c11-b4dc10fe47f4/inventories/DISK_GB request_headers: *system_reader_headers status: 403 - name: project admin can show inventory GET: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories/DISK_GB request_headers: *project_admin_headers status: 200 - name: project member cannot show inventory GET: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories/DISK_GB request_headers: *project_member_headers status: 403 - name: project reader cannot show inventory GET: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories/DISK_GB request_headers: *project_reader_headers status: 403 # update inventory - name: system admin can update inventory PUT: /resource_providers/85475179-de26-4f7a-8c11-b4dc10fe47f4/inventories/DISK_GB request_headers: *system_admin_headers data: resource_provider_generation: 1 total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 200 - name: system reader cannot update inventory PUT: /resource_providers/85475179-de26-4f7a-8c11-b4dc10fe47f4/inventories/DISK_GB request_headers: *system_reader_headers data: resource_provider_generation: 1 total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 403 - name: project admin can update inventory PUT: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories/DISK_GB request_headers: *project_admin_headers data: resource_provider_generation: 1 total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 200 - name: project member cannot update 
inventory PUT: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories/DISK_GB request_headers: *project_member_headers data: resource_provider_generation: 1 total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 403 - name: project reader cannot update inventory PUT: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories/DISK_GB request_headers: *project_reader_headers data: resource_provider_generation: 1 total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 403 # update all inventories - name: system admin can update all inventories PUT: /resource_providers/85475179-de26-4f7a-8c11-b4dc10fe47f4/inventories request_headers: *system_admin_headers data: resource_provider_generation: 2 inventories: DISK_GB: total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 VCPU: total: 8 status: 200 - name: system reader cannot update all inventories PUT: /resource_providers/85475179-de26-4f7a-8c11-b4dc10fe47f4/inventories request_headers: *system_reader_headers data: resource_provider_generation: 2 inventories: DISK_GB: total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 VCPU: total: 8 status: 403 - name: project admin can update all inventories PUT: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories request_headers: *project_admin_headers data: resource_provider_generation: 2 inventories: DISK_GB: total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 VCPU: total: 8 status: 200 - name: project member cannot update all inventories PUT: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories request_headers: *project_member_headers data: resource_provider_generation: 2 inventories: DISK_GB: total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 VCPU: total: 8 status: 403 - name: project reader cannot update all inventories PUT: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories request_headers: *project_reader_headers data: resource_provider_generation: 2 inventories: DISK_GB: total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 VCPU: total: 8 status: 403 # delete inventory - name: system admin can delete a specific inventory DELETE: /resource_providers/85475179-de26-4f7a-8c11-b4dc10fe47f4/inventories/DISK_GB request_headers: *system_admin_headers status: 204 - name: system reader cannot delete a specific inventory DELETE: /resource_providers/85475179-de26-4f7a-8c11-b4dc10fe47f4/inventories/DISK_GB request_headers: *system_reader_headers status: 403 - name: project admin can delete a specific inventory DELETE: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories/DISK_GB request_headers: *project_admin_headers status: 204 - name: project member cannot delete a specific inventory DELETE: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories/DISK_GB request_headers: *project_member_headers status: 403 - name: project reader cannot delete a specific inventory DELETE: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories/DISK_GB request_headers: *project_reader_headers status: 403 # delete all inventory # - name: system admin can delete all inventory DELETE: /resource_providers/85475179-de26-4f7a-8c11-b4dc10fe47f4/inventories request_headers: *system_admin_headers status: 204 - name: system reader cannot delete all 
inventory DELETE: /resource_providers/85475179-de26-4f7a-8c11-b4dc10fe47f4/inventories request_headers: *system_reader_headers status: 403 - name: project admin can delete all inventory DELETE: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories request_headers: *project_admin_headers status: 204 - name: project member cannot delete all inventory DELETE: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories request_headers: *project_member_headers status: 403 - name: project reader cannot delete all inventory DELETE: /resource_providers/0e4fdc4e-5790-477a-9e4f-4f6898537ad9/inventories request_headers: *project_reader_headers status: 403 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/inventory-policy.yaml0000664000175000017500000000410300000000000030740 0ustar00zuulzuul00000000000000# This tests the individual CRUD operations on # /resource_providers/{uuid}/inventories* using a non-admin user with an # open policy configuration. The response validation is intentionally minimal. fixtures: - OpenPolicyFixture defaults: request_headers: x-auth-token: user accept: application/json content-type: application/json openstack-api-version: placement latest tests: - name: post new resource provider POST: /resource_providers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 - name: list inventories GET: /resource_providers/$ENVIRON['RP_UUID']/inventories response_json_paths: $.resource_provider_generation: 0 $.inventories: {} - name: post an inventory POST: /resource_providers/$ENVIRON['RP_UUID']/inventories data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 201 response_headers: location: $SCHEME://$NETLOC/resource_providers/$ENVIRON['RP_UUID']/inventories/DISK_GB - name: show inventory GET: $LOCATION status: 200 - name: update one inventory PUT: $LAST_URL request_headers: content-type: application/json data: resource_provider_generation: 1 total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 200 - name: update all inventory PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_provider_generation: 2 inventories: DISK_GB: total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 VCPU: total: 8 status: 200 - name: delete specific inventory DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories/DISK_GB status: 204 - name: delete all inventory DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/inventory-secure-rbac.yaml0000664000175000017500000003436300000000000031647 0ustar00zuulzuul00000000000000--- fixtures: - SecureRBACPolicyFixture vars: - &project_id $ENVIRON['PROJECT_ID'] - &admin_project_id $ENVIRON['ADMIN_PROJECT_ID'] - &service_project_id $ENVIRON['SERVICE_PROJECT_ID'] - &admin_headers x-auth-token: user x-roles: admin x-project-id: admin_project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &service_headers x-auth-token: user x-roles: service x-project-id: service_project_id accept: application/json content-type: application/json openstack-api-version: 
placement latest - &system_admin_headers x-auth-token: user x-roles: admin,member,reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &system_reader_headers x-auth-token: user x-roles: reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &project_admin_headers x-auth-token: user x-roles: admin,member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_member_headers x-auth-token: user x-roles: member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_reader_headers x-auth-token: user x-roles: reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest tests: - name: admin can create resource providers POST: /resource_providers request_headers: *admin_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 response_json_paths: $.uuid: $ENVIRON['RP_UUID'] - name: service can create resource providers POST: /resource_providers request_headers: *service_headers data: name: $ENVIRON['RP_NAME1'] uuid: $ENVIRON['RP_UUID1'] status: 200 response_json_paths: $.uuid: $ENVIRON['RP_UUID1'] - name: project admin can create resource providers POST: /resource_providers request_headers: *project_admin_headers data: name: $ENVIRON['RP_NAME2'] uuid: $ENVIRON['RP_UUID2'] status: 200 response_json_paths: $.uuid: $ENVIRON['RP_UUID2'] - name: admin can list inventories GET: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *admin_headers response_json_paths: $.resource_provider_generation: 0 $.inventories: {} - name: service can list inventories GET: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *service_headers response_json_paths: $.resource_provider_generation: 0 $.inventories: {} - name: system reader cannot list inventories GET: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *system_reader_headers status: 403 - name: project admin can list inventories GET: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *project_admin_headers response_json_paths: $.resource_provider_generation: 0 $.inventories: {} - name: project member cannot list inventories GET: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *project_member_headers status: 403 - name: project reader cannot list inventories GET: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *project_reader_headers status: 403 - name: project admin can create an inventory POST: /resource_providers/$ENVIRON['RP_UUID2']/inventories request_headers: *project_admin_headers data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 201 response_headers: location: $SCHEME://$NETLOC/resource_providers/$ENVIRON['RP_UUID2']/inventories/DISK_GB - name: project member cannot create an inventory POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *project_member_headers data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 403 - name: project reader cannot create an inventory POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *project_reader_headers 
data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 403 - name: system reader cannot create an inventory POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *system_reader_headers data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 403 - name: system admin cannot create an inventory POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *system_admin_headers data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 403 - name: admin can create an inventory POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *admin_headers data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 201 response_headers: location: $SCHEME://$NETLOC/resource_providers/$ENVIRON['RP_UUID']/inventories/DISK_GB - name: service can create an inventory POST: /resource_providers/$ENVIRON['RP_UUID1']/inventories request_headers: *service_headers data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 201 response_headers: location: $SCHEME://$NETLOC/resource_providers/$ENVIRON['RP_UUID1']/inventories/DISK_GB - name: project admin can show inventory GET: /resource_providers/$ENVIRON['RP_UUID']/inventories/DISK_GB request_headers: *project_admin_headers status: 200 - name: project member cannot show inventory GET: /resource_providers/$ENVIRON['RP_UUID']/inventories/DISK_GB request_headers: *project_member_headers status: 403 - name: project reader cannot show inventory GET: /resource_providers/$ENVIRON['RP_UUID']/inventories/DISK_GB request_headers: *project_reader_headers status: 403 - name: system reader cannot show inventory GET: /resource_providers/$ENVIRON['RP_UUID']/inventories/DISK_GB request_headers: *system_reader_headers status: 403 - name: system admin cannot show inventory GET: /resource_providers/$ENVIRON['RP_UUID']/inventories/DISK_GB request_headers: *system_admin_headers status: 403 - name: admin can show inventory GET: /resource_providers/$ENVIRON['RP_UUID']/inventories/DISK_GB request_headers: *admin_headers status: 200 - name: service can show inventory GET: /resource_providers/$ENVIRON['RP_UUID']/inventories/DISK_GB request_headers: *service_headers status: 200 - name: project admin can update inventory PUT: /resource_providers/$ENVIRON['RP_UUID2']/inventories/DISK_GB request_headers: *project_admin_headers data: resource_provider_generation: 1 total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 200 - name: project member cannot update inventory PUT: $LAST_URL request_headers: *project_member_headers data: resource_provider_generation: 1 total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 403 - name: project reader cannot update inventory PUT: $LAST_URL request_headers: *project_reader_headers data: resource_provider_generation: 1 total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 403 - name: system reader cannot update inventory PUT: $LAST_URL request_headers: *system_reader_headers data: resource_provider_generation: 1 total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 403 - name: system 
admin cannot update inventory PUT: $LAST_URL request_headers: *system_admin_headers data: resource_provider_generation: 1 total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 403 - name: admin can update inventory PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories/DISK_GB request_headers: *admin_headers data: resource_provider_generation: 1 total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 200 - name: service can update inventory PUT: /resource_providers/$ENVIRON['RP_UUID1']/inventories/DISK_GB request_headers: *service_headers data: resource_provider_generation: 1 total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 200 - name: project admin can update all inventories PUT: /resource_providers/$ENVIRON['RP_UUID2']/inventories request_headers: *project_admin_headers data: resource_provider_generation: 2 inventories: DISK_GB: total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 VCPU: total: 8 status: 200 - name: project member cannot update all inventories PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *project_member_headers data: resource_provider_generation: 2 inventories: DISK_GB: total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 VCPU: total: 8 status: 403 - name: project reader cannot update all inventories PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *project_reader_headers data: resource_provider_generation: 2 inventories: DISK_GB: total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 VCPU: total: 8 status: 403 - name: system reader cannot update all inventories PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *system_reader_headers data: resource_provider_generation: 2 inventories: DISK_GB: total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 VCPU: total: 8 status: 403 - name: system admin cannot update all inventories PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *system_admin_headers data: resource_provider_generation: 2 inventories: DISK_GB: total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 VCPU: total: 8 status: 403 - name: admin can update all inventories PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *admin_headers data: resource_provider_generation: 2 inventories: DISK_GB: total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 VCPU: total: 8 status: 200 - name: service can update all inventories PUT: /resource_providers/$ENVIRON['RP_UUID1']/inventories request_headers: *service_headers data: resource_provider_generation: 2 inventories: DISK_GB: total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 VCPU: total: 8 status: 200 - name: project admin can delete a specific inventory DELETE: /resource_providers/$ENVIRON['RP_UUID2']/inventories/DISK_GB request_headers: *project_admin_headers status: 204 - name: project member cannot delete a specific inventory DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories/DISK_GB request_headers: *project_member_headers status: 403 - name: project reader cannot delete a specific inventory DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories/DISK_GB request_headers: *project_reader_headers 
status: 403 - name: system reader cannot delete a specific inventory DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories/DISK_GB request_headers: *system_reader_headers status: 403 - name: system admin cannot delete a specific inventory DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories/DISK_GB request_headers: *system_admin_headers status: 403 - name: admin can delete a specific inventory DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories/DISK_GB request_headers: *admin_headers status: 204 - name: service can delete a specific inventory DELETE: /resource_providers/$ENVIRON['RP_UUID1']/inventories/DISK_GB request_headers: *service_headers status: 204 - name: project admin can delete all inventory DELETE: /resource_providers/$ENVIRON['RP_UUID2']/inventories request_headers: *project_admin_headers status: 204 - name: project member cannot delete all inventory DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *project_member_headers status: 403 - name: project reader cannot delete all inventory DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *project_reader_headers status: 403 - name: system reader cannot delete all inventory DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *system_reader_headers status: 403 - name: system admin cannot delete all inventory DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *system_admin_headers status: 403 - name: admin can delete all inventory DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *admin_headers status: 204 - name: service can delete all inventory DELETE: /resource_providers/$ENVIRON['RP_UUID1']/inventories request_headers: *service_headers status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/inventory.yaml0000664000175000017500000005347400000000000027462 0ustar00zuulzuul00000000000000fixtures: - APIFixture defaults: request_headers: x-auth-token: admin accept: application/json tests: - name: inventories for missing provider GET: /resource_providers/7260669a-e3d4-4867-aaa7-683e2ab6958c/inventories status: 404 response_strings: - No resource provider with uuid 7260669a-e3d4-4867-aaa7-683e2ab6958c found response_json_paths: $.errors[0].title: Not Found - name: delete all inventory for missing resource provider DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: openstack-api-version: placement 1.5 status: 404 - name: post new resource provider POST: /resource_providers request_headers: content-type: application/json data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 201 response_headers: location: //resource_providers/[a-f0-9-]+/ - name: get empty inventories GET: /resource_providers/$ENVIRON['RP_UUID']/inventories response_json_paths: $.resource_provider_generation: 0 $.inventories: {} - name: post a conflicting capacity inventory POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: DISK_GB total: 256 reserved: 512 status: 400 response_strings: - Unable to create inventory for resource provider response_json_paths: $.errors[0].title: Bad Request - name: post an inventory with no total specified POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: DISK_GB status: 400 
response_strings: - JSON does not validate - "'total' is a required property" - name: post a negative inventory POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: DISK_GB total: -1 status: 400 response_strings: - JSON does not validate - -1 is less than the minimum of 1 - name: post an inventory with invalid total POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: DISK_GB total: 0 reserved: 512 min_unit: 1 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 400 response_strings: - "JSON does not validate: 0 is less than the minimum of 1" - "Failed validating 'minimum' in schema['properties']['total']" - name: post an inventory invalid min_unit POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 0 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 400 response_strings: - "JSON does not validate: 0 is less than the minimum of 1" - "Failed validating 'minimum' in schema['properties']['min_unit']" - name: post an inventory invalid max_unit POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 10 max_unit: 0 step_size: 10 allocation_ratio: 1.0 status: 400 response_strings: - "JSON does not validate: 0 is less than the minimum of 1" - "Failed validating 'minimum' in schema['properties']['max_unit']" - name: post an inventory invalid step_size POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 0 allocation_ratio: 1.0 status: 400 response_strings: - "JSON does not validate: 0 is less than the minimum of 1" - "Failed validating 'minimum' in schema['properties']['step_size']" - name: post an inventory POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 201 response_headers: location: $SCHEME://$NETLOC/resource_providers/$ENVIRON['RP_UUID']/inventories/DISK_GB response_json_paths: $.resource_provider_generation: 1 $.total: 2048 $.reserved: 512 - name: get that inventory GET: $LOCATION status: 200 request_headers: # set microversion to 1.15 to get timestamp headers openstack-api-version: placement 1.15 response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? 
last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ response_json_paths: $.resource_provider_generation: 1 $.total: 2048 $.reserved: 512 $.min_unit: 10 $.max_unit: 1024 $.step_size: 10 $.allocation_ratio: 1.0 - name: get inventory v1.14 no cache headers GET: $LAST_URL status: 200 request_headers: openstack-api-version: placement 1.14 response_forbidden_headers: - cache-control - last-modified - name: modify the inventory PUT: $LAST_URL request_headers: content-type: application/json data: resource_provider_generation: 1 total: 2048 reserved: 1024 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 200 response_headers: content-type: /application/json/ response_json_paths: $.reserved: 1024 - name: confirm inventory change GET: $LAST_URL response_json_paths: $.resource_provider_generation: 2 $.total: 2048 $.reserved: 1024 - name: modify inventory invalid generation PUT: $LAST_URL request_headers: content-type: application/json openstack-api-version: placement 1.23 data: resource_provider_generation: 5 total: 2048 status: 409 response_strings: - resource provider generation conflict response_json_paths: $.errors[0].title: Conflict $.errors[0].code: placement.concurrent_update - name: modify inventory no such resource class in inventory PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories/MEMORY_MB request_headers: content-type: application/json data: resource_provider_generation: 2 total: 2048 status: 400 response_strings: - No inventory record with resource class response_json_paths: $.errors[0].title: Bad Request - name: modify inventory invalid data desc: This should 400 because reserved is greater than total PUT: $LAST_URL request_headers: content-type: application/json data: resource_provider_generation: 2 total: 2048 reserved: 4096 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 400 response_strings: - Unable to update inventory for resource provider $ENVIRON['RP_UUID'] response_json_paths: $.errors[0].title: Bad Request - name: put inventory bad form desc: This should 400 because the body does not conform to the inventory schema PUT: $LAST_URL request_headers: content-type: application/json data: house: red car: blue status: 400 response_strings: - JSON does not validate response_json_paths: $.errors[0].title: Bad Request - name: post inventory malformed json POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: '{"foo": }' status: 400 response_strings: - Malformed JSON response_json_paths: $.errors[0].title: Bad Request - name: post inventory bad syntax schema POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: bad_class total: 2048 status: 400 response_json_paths: $.errors[0].title: Bad Request - name: post inventory bad resource class POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: NO_CLASS_14 total: 2048 status: 400 response_strings: - No such resource class NO_CLASS_14 response_json_paths: $.errors[0].title: Bad Request - name: post inventory duplicated resource class desc: DISK_GB was already created above POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: DISK_GB total: 2048 status: 409 response_strings: - Update conflict response_json_paths: $.errors[0].title: Conflict - name: get list of inventories GET: /resource_providers/$ENVIRON['RP_UUID']/inventories 
request_headers: # set microversion to 1.15 to get timestamp headers openstack-api-version: placement 1.15 response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ response_json_paths: $.resource_provider_generation: 2 $.inventories.DISK_GB.total: 2048 $.inventories.DISK_GB.reserved: 1024 - name: delete the inventory DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories/DISK_GB status: 204 - name: get now empty inventories GET: /resource_providers/$ENVIRON['RP_UUID']/inventories response_json_paths: $.resource_provider_generation: 3 $.inventories: {} - name: post new disk inventory POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: DISK_GB total: 1024 status: 201 - name: post new ipv4 address inventory POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: IPV4_ADDRESS total: 255 reserved: 2 status: 201 - name: list both those inventories GET: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json response_json_paths: $.resource_provider_generation: 5 $.inventories.DISK_GB.total: 1024 $.inventories.IPV4_ADDRESS.total: 255 - name: post ipv4 address inventory again POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: IPV4_ADDRESS total: 255 reserved: 2 status: 409 response_json_paths: $.errors[0].title: Conflict - name: delete inventory DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories/IPV4_ADDRESS status: 204 response_forbidden_headers: - content-type - name: delete inventory again DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories/IPV4_ADDRESS status: 404 response_strings: - No inventory of class IPV4_ADDRESS found for delete response_json_paths: $.errors[0].title: Not Found - name: get missing inventory class GET: /resource_providers/$ENVIRON['RP_UUID']/inventories/IPV4_ADDRESS status: 404 response_strings: - No inventory of class IPV4_ADDRESS for $ENVIRON['RP_UUID'] response_json_paths: $.errors[0].title: Not Found - name: get invalid inventory class GET: /resource_providers/$ENVIRON['RP_UUID']/inventories/HOUSE status: 404 response_strings: - No inventory of class HOUSE for $ENVIRON['RP_UUID'] response_json_paths: $.errors[0].title: Not Found - name: get missing resource provider inventory GET: /resource_providers/2e1dda56-8b18-4fb9-8c5c-3125891b7143/inventories/VCPU status: 404 - name: create another resource provider POST: /resource_providers request_headers: content-type: application/json data: name: disk-network status: 201 - name: put all inventory PUT: $LOCATION/inventories request_headers: content-type: application/json # set microversion to 1.15 to get timestamp headers openstack-api-version: placement 1.15 data: resource_provider_generation: 0 inventories: IPV4_ADDRESS: total: 253 DISK_GB: total: 1024 status: 200 response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? 
last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ response_json_paths: $.resource_provider_generation: 1 $.inventories.IPV4_ADDRESS.total: 253 $.inventories.IPV4_ADDRESS.reserved: 0 $.inventories.DISK_GB.total: 1024 $.inventories.DISK_GB.allocation_ratio: 1.0 - name: check both inventory classes GET: $LAST_URL response_json_paths: $.resource_provider_generation: 1 $.inventories.DISK_GB.total: 1024 $.inventories.IPV4_ADDRESS.total: 253 - name: check one inventory class GET: $LAST_URL/DISK_GB response_json_paths: $.total: 1024 - name: put all inventory bad generation PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json openstack-api-version: placement 1.23 data: resource_provider_generation: 99 inventories: IPV4_ADDRESS: total: 253 status: 409 response_strings: - resource provider generation conflict response_json_paths: $.errors[0].title: Conflict $.errors[0].code: placement.concurrent_update - name: put all inventory unknown resource class PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_provider_generation: 6 inventories: HOUSE: total: 253 status: 400 response_strings: - Unknown resource class in inventory response_json_paths: $.errors[0].title: Bad Request - name: post an inventory with total exceed max limit POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: DISK_GB total: 2147483648 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 400 response_strings: - "Failed validating 'maximum'" response_json_paths: $.errors[0].title: Bad Request - name: post an inventory with reserved exceed max limit POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: DISK_GB total: 1024 reserved: 2147483648 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 400 response_strings: - "Failed validating 'maximum'" response_json_paths: $.errors[0].title: Bad Request - name: post an inventory with min_unit exceed max limit POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: DISK_GB total: 1024 reserved: 512 min_unit: 2147483648 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 400 response_strings: - "Failed validating 'maximum'" response_json_paths: $.errors[0].title: Bad Request - name: post an inventory with max_unit exceed max limit POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: DISK_GB total: 1024 reserved: 512 min_unit: 10 max_unit: 2147483648 step_size: 10 allocation_ratio: 1.0 status: 400 response_strings: - "Failed validating 'maximum'" response_json_paths: $.errors[0].title: Bad Request - name: post an inventory with step_size exceed max limit POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: DISK_GB total: 1024 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 2147483648 allocation_ratio: 1.0 status: 400 response_strings: - "Failed validating 'maximum'" response_json_paths: $.errors[0].title: Bad Request - name: post an inventory with allocation_ratio exceed max limit POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: DISK_GB total: 1024 reserved: 512 min_unit: 
10 max_unit: 1024 step_size: 10 allocation_ratio: 3.40282e+39 status: 400 response_strings: - "Failed validating 'maximum'" response_json_paths: $.errors[0].title: Bad Request - name: modify the inventory with total exceed max limit PUT: $LAST_URL request_headers: content-type: application/json data: resource_provider_generation: 1 inventories: DISK_GB: total: 2147483648 reserved: 512 status: 400 response_strings: - "Failed validating 'maximum'" response_json_paths: $.errors[0].title: Bad Request - name: modify the inventory with allocation_ratio exceed max limit PUT: $LAST_URL request_headers: content-type: application/json data: resource_provider_generation: 1 inventories: DISK_GB: total: 1024 reserved: 512 allocation_ratio: 3.40282e+39 status: 400 response_strings: - "Failed validating 'maximum'" response_json_paths: $.errors[0].title: Bad Request # NOTE(cdent): The generation is 6 now, based on the activity at # the start of this file. - name: put all inventory bad capacity PUT: $LAST_URL request_headers: content-type: application/json data: resource_provider_generation: 6 inventories: IPV4_ADDRESS: total: 253 reserved: 512 status: 400 response_strings: - Unable to update inventory - greater than or equal to total response_json_paths: $.errors[0].title: Bad Request - name: put all inventory zero capacity old microversion PUT: $LAST_URL request_headers: content-type: application/json data: resource_provider_generation: 6 inventories: IPV4_ADDRESS: total: 253 reserved: 253 status: 400 response_strings: - Unable to update inventory - greater than or equal to total response_json_paths: $.errors[0].title: Bad Request - name: put inventory with reserved equal to total PUT: $LAST_URL request_headers: content-type: application/json openstack-api-version: placement 1.26 data: resource_provider_generation: 6 inventories: IPV4_ADDRESS: total: 253 reserved: 253 status: 200 - name: put all inventory bad capacity in new microversion PUT: $LAST_URL request_headers: content-type: application/json openstack-api-version: placement 1.26 data: resource_provider_generation: 7 inventories: IPV4_ADDRESS: total: 253 reserved: 512 status: 400 response_strings: - Unable to update inventory - greater than total response_json_paths: $.errors[0].title: Bad Request - name: put one inventory zero capacity old microversion PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories/IPV4_ADDRESS request_headers: content-type: application/json data: resource_provider_generation: 7 total: 253 reserved: 253 status: 400 response_strings: - Unable to update inventory - greater than or equal to total response_json_paths: $.errors[0].title: Bad Request - name: put one inventory with reserved equal to total new microversion PUT: $LAST_URL request_headers: content-type: application/json openstack-api-version: placement 1.26 data: resource_provider_generation: 7 total: 512 reserved: 512 status: 200 - name: delete all inventory bad generation PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_provider_generation: 99 inventories: IPV4_ADDRESS: total: 253 status: 409 response_strings: - resource provider generation conflict - name: delete all inventory DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: openstack-api-version: placement 1.5 status: 204 - name: delete empty inventories DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: openstack-api-version: placement 1.5 status: 204 - name: get inventories after 
deletions GET: /resource_providers/$ENVIRON['RP_UUID']/inventories response_json_paths: $.resource_provider_generation: 10 $.inventories: {} - name: post an inventory again POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 201 response_headers: location: $SCHEME://$NETLOC/resource_providers/$ENVIRON['RP_UUID']/inventories/DISK_GB response_json_paths: $.resource_provider_generation: 11 $.total: 2048 $.reserved: 512 - name: delete all inventory with put PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: content-type: application/json openstack-api-version: placement 1.4 data: resource_provider_generation: 11 inventories: {} response_json_paths: $.resource_provider_generation: 12 $.inventories: {} status: 200 - name: get generation after deletion GET: /resource_providers/$ENVIRON['RP_UUID']/inventories response_json_paths: $.resource_provider_generation: 12 $.inventories: {} - name: delete inventories earlier version DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: openstack-api-version: placement 1.4 status: 405 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/microversion-bug-1724065.yaml0000664000175000017500000000076600000000000031541 0ustar00zuulzuul00000000000000# Test launchpad bug https://bugs.launchpad.net/nova/+bug/1724065 fixtures: - APIFixture defaults: request_headers: x-auth-token: user tests: # min version from start of placement time is 1.0 # Without the fix, this results in a 500 with an 'HTTP_ACCEPT' # KeyError. - name: no accept header and out of range microversion GET: /resource_providers request_headers: openstack-api-version: placement 0.9 status: 406 response_strings: - Unacceptable version header ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/microversion.yaml0000664000175000017500000000421200000000000030126 0ustar00zuulzuul00000000000000# Tests to build microversion functionality behavior and confirm # it is present and behaving as expected. 
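# For orientation (illustrative note, not a test): clients opt in to a
# microversion with a header of the form
#
#   openstack-api-version: placement <major>.<minor>
#
# e.g. "openstack-api-version: placement 1.15". The value "latest" selects
# the newest version the service supports (1.39 here), and omitting the
# header falls back to the minimum, 1.0, as the tests below demonstrate.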
fixtures: - APIFixture defaults: request_headers: accept: application/json x-auth-token: user tests: - name: root has microversion header GET: / response_headers: vary: /openstack-api-version/ openstack-api-version: /^placement \d+\.\d+$/ - name: root has microversion info GET: / response_json_paths: $.versions[0].max_version: /^\d+\.\d+$/ $.versions[0].min_version: /^\d+\.\d+$/ $.versions[0].id: v1.0 $.versions[0].status: CURRENT $.versions[0].links[?rel = 'self'].href: '' - name: unavailable microversion raises 406 GET: / request_headers: openstack-api-version: placement 0.5 status: 406 response_headers: content-type: /application/json/ response_strings: - "Unacceptable version header: 0.5" response_json_paths: $.errors[0].title: Not Acceptable - name: latest microversion is 1.39 GET: / request_headers: openstack-api-version: placement latest response_headers: vary: /openstack-api-version/ openstack-api-version: placement 1.39 - name: other accept header bad version GET: / request_headers: accept: text/html openstack-api-version: placement 0.5 status: 406 response_headers: content-type: /text/html/ response_strings: - "Unacceptable version header: 0.5" - name: bad format string raises 400 GET: / request_headers: openstack-api-version: placement pony.horse status: 400 response_strings: - "invalid version string: pony.horse" response_json_paths: $.errors[0].title: Bad Request - name: bad format multidot raises 400 GET: / request_headers: openstack-api-version: placement 1.2.3 status: 400 response_strings: - "invalid version string: 1.2.3" response_json_paths: $.errors[0].title: Bad Request - name: error in application produces microversion headers desc: we do not want xml POST: / request_headers: content-type: application/xml status: 405 response_headers: openstack-api-version: placement 1.0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/non-cors.yaml0000664000175000017500000000104600000000000027147 0ustar00zuulzuul00000000000000# Confirm that things work as intended when CORS is not configured. 
fixtures: - APIFixture defaults: request_headers: x-auth-token: user tests: - name: options request not allowed OPTIONS: / request_headers: origin: http://valid.example.com access-control-request-method: GET status: 405 - name: get request no cors headers GET: / request_headers: origin: http://valid.example.com access-control-request-method: GET status: 200 response_forbidden_headers: - access-control-allow-origin ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/reshaper-legacy-rbac.yaml0000664000175000017500000000432700000000000031376 0ustar00zuulzuul00000000000000--- fixtures: - LegacyRBACPolicyFixture vars: - &project_id $ENVIRON['PROJECT_ID'] - &project_admin_headers x-auth-token: user x-roles: admin,member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_member_headers x-auth-token: user x-roles: member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest tests: - name: create parent resource provider POST: /resource_providers request_headers: *project_admin_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 - name: create inventory for the parent resource provider POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *project_admin_headers data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 201 - name: create a child provider POST: /resource_providers request_headers: *project_admin_headers data: uuid: 04914444-41ae-4ff3-ab56-ded01552cd1e name: 636f2798-9599-4371-a3ed-e7b2128aef97 parent_provider_uuid: $ENVIRON['RP_UUID'] status: 200 - name: project member cannot reshape POST: /reshaper request_headers: *project_member_headers data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 1 inventories: [] 04914444-41ae-4ff3-ab56-ded01552cd1e: resource_provider_generation: 0 inventories: DISK_GB: total: 2048 step_size: 10 min_unit: 10 max_unit: 1200 allocations: {} status: 403 - name: project admin can reshape POST: /reshaper request_headers: *project_admin_headers data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 1 inventories: {} 04914444-41ae-4ff3-ab56-ded01552cd1e: resource_provider_generation: 0 inventories: DISK_GB: total: 2048 step_size: 10 min_unit: 10 max_unit: 1200 allocations: {} status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/reshaper-policy.yaml0000664000175000017500000000102200000000000030511 0ustar00zuulzuul00000000000000# This tests POSTs to /reshaper using a non-admin user with an open policy # configuration. The response is a 400 because of bad content, meaning we got # past policy enforcement. If policy was being enforced we'd get a 403. 
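# For reference (illustrative sketch, not part of this test): a well-formed
# /reshaper body, as exercised by the reshaper RBAC files alongside this one,
# maps each resource provider uuid to its generation and replacement
# inventories and supplies an allocations map, roughly:
#
#   inventories:
#     <provider_uuid>:
#       resource_provider_generation: <n>
#       inventories:
#         DISK_GB:
#           total: 2048
#   allocations: {}
#
# The deliberately bad body below fails JSON schema validation with a 400,
# which is enough to show the request got past policy enforcement.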
fixtures: - OpenPolicyFixture defaults: request_headers: x-auth-token: user accept: application/json content-type: application/json openstack-api-version: placement latest tests: - name: attempt reshape POST: /reshaper data: bad: content status: 400 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/reshaper-secure-rbac.yaml0000664000175000017500000001315600000000000031420 0ustar00zuulzuul00000000000000--- fixtures: - SecureRBACPolicyFixture vars: - &project_id $ENVIRON['PROJECT_ID'] - &admin_project_id $ENVIRON['ADMIN_PROJECT_ID'] - &service_project_id $ENVIRON['SERVICE_PROJECT_ID'] - &admin_headers x-auth-token: user x-roles: admin x-project-id: admin_project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &service_headers x-auth-token: user x-roles: service x-project-id: service_project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &system_admin_headers x-auth-token: user x-roles: admin,member,reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &system_reader_headers x-auth-token: user x-roles: reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &project_admin_headers x-auth-token: user x-roles: admin,member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_member_headers x-auth-token: user x-roles: member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_reader_headers x-auth-token: user x-roles: reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest tests: - name: create parent resource provider POST: /resource_providers request_headers: *admin_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 - name: create inventory for the parent resource provider POST: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: *admin_headers data: resource_class: DISK_GB total: 2048 reserved: 512 min_unit: 10 max_unit: 1024 step_size: 10 allocation_ratio: 1.0 status: 201 - name: create a child provider POST: /resource_providers request_headers: *admin_headers data: uuid: 04914444-41ae-4ff3-ab56-ded01552cd1e name: 636f2798-9599-4371-a3ed-e7b2128aef97 parent_provider_uuid: $ENVIRON['RP_UUID'] status: 200 - name: project reader cannot reshape POST: /reshaper request_headers: *project_reader_headers data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 1 inventories: [] 04914444-41ae-4ff3-ab56-ded01552cd1e: resource_provider_generation: 0 inventories: DISK_GB: total: 2048 step_size: 10 min_unit: 10 max_unit: 1200 allocations: {} status: 403 - name: project member cannot reshape POST: /reshaper request_headers: *project_member_headers data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 1 inventories: [] 04914444-41ae-4ff3-ab56-ded01552cd1e: resource_provider_generation: 0 inventories: DISK_GB: total: 2048 step_size: 10 min_unit: 10 max_unit: 1200 allocations: {} status: 403 - name: project admin cannot reshape POST: /reshaper request_headers: *project_admin_headers data: inventories: 
$ENVIRON['RP_UUID']: resource_provider_generation: 1 inventories: {} 04914444-41ae-4ff3-ab56-ded01552cd1e: resource_provider_generation: 0 inventories: DISK_GB: total: 2048 step_size: 10 min_unit: 10 max_unit: 1200 allocations: {} status: 403 - name: system reader cannot reshape POST: /reshaper request_headers: *system_reader_headers data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 1 inventories: [] 04914444-41ae-4ff3-ab56-ded01552cd1e: resource_provider_generation: 0 inventories: DISK_GB: total: 2048 step_size: 10 min_unit: 10 max_unit: 1200 allocations: {} status: 403 - name: system admin cannot reshape POST: /reshaper request_headers: *system_admin_headers data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 1 inventories: {} 04914444-41ae-4ff3-ab56-ded01552cd1e: resource_provider_generation: 0 inventories: DISK_GB: total: 2048 step_size: 10 min_unit: 10 max_unit: 1200 allocations: {} status: 403 - name: admin cannot reshape POST: /reshaper request_headers: *admin_headers data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 1 inventories: {} 04914444-41ae-4ff3-ab56-ded01552cd1e: resource_provider_generation: 0 inventories: DISK_GB: total: 2048 step_size: 10 min_unit: 10 max_unit: 1200 allocations: {} status: 403 - name: service can reshape POST: /reshaper request_headers: *service_headers data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 1 inventories: {} 04914444-41ae-4ff3-ab56-ded01552cd1e: resource_provider_generation: 0 inventories: DISK_GB: total: 2048 step_size: 10 min_unit: 10 max_unit: 1200 allocations: {} status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/reshaper.yaml0000664000175000017500000005474400000000000027237 0ustar00zuulzuul00000000000000# /reshaper provides a way to atomically move inventory and allocations from # one resource provider to another, often from a root provider to a new child. fixtures: - AllocationFixture defaults: request_headers: x-auth-token: admin accept: application/json content-type: application/json openstack-api-version: placement 1.30 tests: - name: reshaper is POST only GET: /reshaper status: 405 response_headers: allow: POST - name: reshaper requires admin not user POST: /reshaper request_headers: x-auth-token: user status: 403 - name: reshaper not there old POST: /reshaper request_headers: openstack-api-version: placement 1.29 status: 404 - name: very invalid 400 POST: /reshaper status: 400 data: cows: moo response_strings: - JSON does not validate - name: missing allocations POST: /reshaper data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 0 inventories: VCPU: total: 1 status: 400 # There are existing allocations on RP_UUID (created by the AllocationFixture). # As the code is currently we cannot null out those allocations from reshaper # because the allocations identify nothing (replace_all() is a no op). - name: empty allocations inv in use POST: /reshaper data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 5 inventories: VCPU: total: 1 allocations: {} status: 409 response_json_paths: $.errors[0].code: placement.inventory.inuse # Again, with the existing allocations on RP_UUID being held by CONSUMER_ID, # not INSTANCE_ID, when we try to allocate here, we don't have room. 
This # request is correctly rejected: to actually reshape here, we would need to # move the CONSUMER_ID allocations in this call (and set the inventory to # something that could accommodate them). - name: with allocations POST: /reshaper data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 5 inventories: VCPU: total: 1 allocations: $ENVIRON['INSTANCE_UUID']: allocations: $ENVIRON['RP_UUID']: resources: VCPU: 1 consumer_generation: null project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] status: 409 response_strings: - Unable to allocate inventory - name: bad rp gen POST: /reshaper data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 4 inventories: VCPU: total: 1 allocations: {} status: 409 response_strings: - resource provider generation conflict - 'actual: 5, given: 4' - name: bad consumer gen POST: /reshaper data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 5 inventories: VCPU: total: 1 allocations: $ENVIRON['INSTANCE_UUID']: allocations: $ENVIRON['RP_UUID']: resources: VCPU: 1 # The correct generation here is null, because INSTANCE_UUID # represents a new consumer at this point. consumer_generation: 99 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] status: 409 response_strings: - consumer generation conflict - name: create a child provider POST: /resource_providers data: uuid: $ENVIRON['ALT_RP_UUID'] name: $ENVIRON['ALT_RP_NAME'] parent_provider_uuid: $ENVIRON['RP_UUID'] # This and subsequent error checking tests are modelled on the successful # test which is at the end of this file. Using the same data, with minor # adjustments, so that the cause of failure is clear. - name: move to bad child 400 POST: /reshaper data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 5 inventories: DISK_GB: total: 2048 step_size: 10 min_unit: 10 max_unit: 1200 # This resource provider does not exist.
'39bafc00-3fff-444d-b87a-2ead3f866e05': resource_provider_generation: 0 inventories: VCPU: total: 10 max_unit: 8 # these consumer generations are all 1 because they have # previously allocated allocations: $ENVIRON['CONSUMER_0']: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 1000 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 1 $ENVIRON['CONSUMER_ID']: allocations: $ENVIRON['ALT_RP_UUID']: resources: VCPU: 8 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 1 $ENVIRON['ALT_CONSUMER_ID']: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 20 $ENVIRON['ALT_RP_UUID']: resources: VCPU: 1 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['ALT_USER_ID'] consumer_generation: 1 status: 400 response_json_paths: $.errors[0].code: placement.resource_provider.not_found - name: poorly formed inventory 400 POST: /reshaper data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 5 inventories: DISK_GB: total: 2048 step_size: 10 min_unit: 10 max_unit: 1200 bad_field: moo $ENVIRON['ALT_RP_UUID']: resource_provider_generation: 0 inventories: VCPU: total: 10 max_unit: 8 # these consumer generations are all 1 because they have # previously allocated allocations: $ENVIRON['CONSUMER_0']: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 1000 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 1 $ENVIRON['CONSUMER_ID']: allocations: $ENVIRON['ALT_RP_UUID']: resources: VCPU: 8 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 1 $ENVIRON['ALT_CONSUMER_ID']: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 20 $ENVIRON['ALT_RP_UUID']: resources: VCPU: 1 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['ALT_USER_ID'] consumer_generation: 1 status: 400 response_strings: - JSON does not validate - "'bad_field' was unexpected" - name: poorly formed allocation 400 POST: /reshaper data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 5 inventories: DISK_GB: total: 2048 step_size: 10 min_unit: 10 max_unit: 1200 $ENVIRON['ALT_RP_UUID']: resource_provider_generation: 0 inventories: VCPU: total: 10 max_unit: 8 # these consumer generations are all 1 because they have # previously allocated allocations: $ENVIRON['CONSUMER_0']: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 1000 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 1 # This bad field will cause a failure in the schema. 
bad_field: moo $ENVIRON['CONSUMER_ID']: allocations: $ENVIRON['ALT_RP_UUID']: resources: VCPU: 8 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 1 $ENVIRON['ALT_CONSUMER_ID']: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 20 $ENVIRON['ALT_RP_UUID']: resources: VCPU: 1 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['ALT_USER_ID'] consumer_generation: 1 status: 400 response_strings: - JSON does not validate - "'bad_field' was unexpected" - name: target resource class not found POST: /reshaper data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 5 inventories: # not a real inventory, but valid form DISK_OF_STEEL: total: 2048 step_size: 10 min_unit: 10 max_unit: 1200 $ENVIRON['ALT_RP_UUID']: resource_provider_generation: 0 inventories: VCPU: total: 10 max_unit: 8 # these consumer generations are all 1 because they have # previously allocated allocations: $ENVIRON['CONSUMER_0']: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 1000 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 1 $ENVIRON['CONSUMER_ID']: allocations: $ENVIRON['ALT_RP_UUID']: resources: VCPU: 8 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 1 $ENVIRON['ALT_CONSUMER_ID']: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 20 $ENVIRON['ALT_RP_UUID']: resources: VCPU: 1 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['ALT_USER_ID'] consumer_generation: 1 status: 400 response_strings: - No such resource class DISK_OF_STEEL - name: move bad allocation 409 desc: max unit on disk gb inventory violated POST: /reshaper data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 5 inventories: DISK_GB: total: 2048 step_size: 10 min_unit: 10 max_unit: 600 $ENVIRON['ALT_RP_UUID']: resource_provider_generation: 0 inventories: VCPU: total: 10 max_unit: 8 # these consumer generations are all 1 because they have # previously allocated allocations: $ENVIRON['CONSUMER_0']: allocations: $ENVIRON['RP_UUID']: resources: # Violates max unit DISK_GB: 1000 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 1 $ENVIRON['CONSUMER_ID']: allocations: $ENVIRON['ALT_RP_UUID']: resources: VCPU: 8 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 1 $ENVIRON['ALT_CONSUMER_ID']: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 20 $ENVIRON['ALT_RP_UUID']: resources: VCPU: 1 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['ALT_USER_ID'] consumer_generation: 1 status: 409 response_strings: - Unable to allocate inventory # This is a successful reshape using information as it was established above # or in the AllocationFixture. A non-obvious fact of this test is that it # confirms that resource provider and consumer generations are rolled back # when failures occur, as in the tests above. 
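# As a quick sanity check on the expected result: after the move, the parent
# keeps only DISK_GB usage (1000 from CONSUMER_0 plus 20 from ALT_CONSUMER_ID,
# i.e. 1020) while the child carries the VCPU usage (8 from CONSUMER_ID plus 1
# from ALT_CONSUMER_ID, i.e. 9), which is what the usage checks below assert.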
- name: move vcpu inventory and allocations to child POST: /reshaper data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 5 inventories: DISK_GB: total: 2048 step_size: 10 min_unit: 10 max_unit: 1200 $ENVIRON['ALT_RP_UUID']: resource_provider_generation: 0 inventories: VCPU: total: 10 max_unit: 8 # these consumer generations are all 1 because they have # previously allocated allocations: $ENVIRON['CONSUMER_0']: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 1000 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 1 $ENVIRON['CONSUMER_ID']: allocations: $ENVIRON['ALT_RP_UUID']: resources: VCPU: 8 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 1 $ENVIRON['ALT_CONSUMER_ID']: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 20 $ENVIRON['ALT_RP_UUID']: resources: VCPU: 1 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['ALT_USER_ID'] consumer_generation: 1 status: 204 - name: get usages on parent after move GET: /resource_providers/$ENVIRON['RP_UUID']/usages response_json_paths: $.usages: DISK_GB: 1020 $.resource_provider_generation: 8 - name: get usages on child after move GET: /resource_providers/$ENVIRON['ALT_RP_UUID']/usages response_json_paths: $.usages: VCPU: 9 $.resource_provider_generation: 3 # Now move some of the inventory back to the original provider, and put all # the allocations under two new consumers. This is an artificial test to # exercise new consumer creation. - name: consolidate inventory and allocations POST: /reshaper data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 8 inventories: DISK_GB: total: 2048 step_size: 10 min_unit: 10 max_unit: 1200 VCPU: total: 10 max_unit: 8 $ENVIRON['ALT_RP_UUID']: resource_provider_generation: 3 inventories: {} allocations: $ENVIRON['CONSUMER_0']: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 1000 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 2 '7bd2e864-0415-445c-8fc2-328520ef7642': allocations: $ENVIRON['RP_UUID']: resources: VCPU: 8 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: null '2dfa608c-cecb-4fe0-a1bb-950015fa731f': allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 20 VCPU: 1 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['ALT_USER_ID'] consumer_generation: null $ENVIRON['CONSUMER_ID']: allocations: {} project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 2 $ENVIRON['ALT_CONSUMER_ID']: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 20 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['ALT_USER_ID'] consumer_generation: 2 status: 204 - name: get usages on parent after move back GET: /resource_providers/$ENVIRON['RP_UUID']/usages response_json_paths: $.usages: VCPU: 9 DISK_GB: 1040 $.resource_provider_generation: 11 - name: get usages on child after move back GET: /resource_providers/$ENVIRON['ALT_RP_UUID']/usages response_json_paths: $.usages: {} $.resource_provider_generation: 5 # At microversion 1.34 we accept a mappings key with allocations. 
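# The mappings value relates request group suffixes to lists of provider
# UUIDs, as in this rough sketch (mirroring the test below, with <rp_uuid>
# as a placeholder):
#
#   mappings:
#     '':
#       - <rp_uuid>
#
# Requests at 1.33 or earlier that include mappings are rejected with
# "Additional properties are not allowed", as the final test in this file
# confirms.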
- name: reshape with mappings POST: /reshaper request_headers: openstack-api-version: placement 1.34 data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 11 inventories: DISK_GB: total: 2048 step_size: 10 min_unit: 10 max_unit: 1200 VCPU: total: 10 max_unit: 8 $ENVIRON['ALT_RP_UUID']: resource_provider_generation: 5 inventories: {} allocations: $ENVIRON['CONSUMER_0']: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 1000 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 3 mappings: '': - $ENVIRON['RP_UUID'] '7bd2e864-0415-445c-8fc2-328520ef7642': allocations: $ENVIRON['RP_UUID']: resources: VCPU: 8 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 1 '2dfa608c-cecb-4fe0-a1bb-950015fa731f': allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 20 VCPU: 1 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['ALT_USER_ID'] consumer_generation: 1 $ENVIRON['CONSUMER_ID']: allocations: {} project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: null $ENVIRON['ALT_CONSUMER_ID']: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 20 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['ALT_USER_ID'] consumer_generation: 3 status: 204 - name: reshape with mappings wrong microversion POST: /reshaper request_headers: openstack-api-version: placement 1.33 data: inventories: $ENVIRON['RP_UUID']: resource_provider_generation: 8 inventories: DISK_GB: total: 2048 step_size: 10 min_unit: 10 max_unit: 1200 VCPU: total: 10 max_unit: 8 $ENVIRON['ALT_RP_UUID']: resource_provider_generation: 3 inventories: {} allocations: $ENVIRON['CONSUMER_0']: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 1000 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 2 mappings: '': - $ENVIRON['RP_UUID'] '7bd2e864-0415-445c-8fc2-328520ef7642': allocations: $ENVIRON['RP_UUID']: resources: VCPU: 8 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: null '2dfa608c-cecb-4fe0-a1bb-950015fa731f': allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 20 VCPU: 1 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['ALT_USER_ID'] consumer_generation: null $ENVIRON['CONSUMER_ID']: allocations: {} project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['USER_ID'] consumer_generation: 2 $ENVIRON['ALT_CONSUMER_ID']: allocations: $ENVIRON['RP_UUID']: resources: DISK_GB: 20 project_id: $ENVIRON['PROJECT_ID'] user_id: $ENVIRON['ALT_USER_ID'] consumer_generation: 2 status: 400 response_json_paths: $.errors[0].detail: /Additional properties are not allowed/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/resource-class-in-use.yaml0000664000175000017500000000375400000000000031551 0ustar00zuulzuul00000000000000# A sequence of tests that confirms that a resource class in use # cannot be deleted. fixtures: - APIFixture defaults: request_headers: x-auth-token: admin accept: application/json content-type: application/json # We need version 1.11 as the PUT /allocations below is # using the < 1.12 data format. 
openstack-api-version: placement 1.11 tests: - name: create a resource provider POST: /resource_providers data: name: an rp status: 201 - name: get resource provider GET: $LOCATION status: 200 - name: create a resource class PUT: /resource_classes/CUSTOM_GOLD status: 201 - name: add inventory to an rp PUT: /resource_providers/$HISTORY['get resource provider'].$RESPONSE['$.uuid']/inventories data: resource_provider_generation: 0 inventories: VCPU: total: 24 CUSTOM_GOLD: total: 5 status: 200 - name: allocate some of it PUT: /allocations/6d9f83db-6eb5-49f6-84b0-5d03c6aa9fc8 data: allocations: - resource_provider: uuid: $HISTORY['get resource provider'].$RESPONSE['$.uuid'] resources: VCPU: 5 CUSTOM_GOLD: 1 project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784 user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70 status: 204 - name: fail delete resource class allocations DELETE: /resource_classes/CUSTOM_GOLD status: 409 response_strings: - Error in delete resource class - Class is in use in inventory - name: delete the allocation DELETE: $HISTORY['allocate some of it'].$URL status: 204 - name: fail delete resource class inventory DELETE: /resource_classes/CUSTOM_GOLD status: 409 response_strings: - Error in delete resource class - Class is in use in inventory - name: delete the inventory DELETE: $HISTORY['add inventory to an rp'].$URL status: 204 - name: delete resource class DELETE: /resource_classes/CUSTOM_GOLD status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/resource-classes-1-6.yaml0000664000175000017500000000077700000000000031206 0ustar00zuulzuul00000000000000# Confirm that 1.7 behavior of PUT resource classes is not in # microversion 1.6. fixtures: - APIFixture defaults: request_headers: x-auth-token: admin accept: application/json content-type: application/json openstack-api-version: placement 1.6 tests: - name: bodiless put PUT: /resource_classes/CUSTOM_COW status: 400 response_strings: # We don't check much of this string because it is different # between python 2 and 3. 
- "Malformed JSON:" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/resource-classes-1-7.yaml0000664000175000017500000000253700000000000031203 0ustar00zuulzuul00000000000000fixtures: - APIFixture defaults: request_headers: x-auth-token: admin accept: application/json content-type: application/json openstack-api-version: placement 1.7 tests: - name: create new custom class with put PUT: /resource_classes/CUSTOM_COW status: 201 response_headers: location: //resource_classes/CUSTOM_COW/ - name: verify that class with put PUT: /resource_classes/CUSTOM_COW status: 204 response_headers: location: //resource_classes/CUSTOM_COW/ - name: fail to put non custom class PUT: /resource_classes/COW status: 400 response_strings: - "Failed validating 'pattern'" - name: try to put standard class PUT: /resource_classes/VCPU status: 400 response_strings: - "Failed validating 'pattern'" - name: try to put too long class PUT: /resource_classes/CUSTOM_SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS status: 400 response_strings: - "Failed validating 'maxLength'" - name: post to create still works POST: /resource_classes data: name: CUSTOM_SHEEP status: 201 response_headers: location: //resource_classes/CUSTOM_SHEEP/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/resource-classes-last-modified.yaml0000664000175000017500000000634000000000000033414 0ustar00zuulzuul00000000000000# Confirm the behavior and presence of last-modified headers for resource # classes across multiple microversions. # # We have the following routes, with associated microversion, and bodies. # # '/resource_classes': { # 'GET': resource_class.list_resource_classes, # v1.2, body # 'POST': resource_class.create_resource_class # v1.2, no body # }, # '/resource_classes/{name}': { # 'GET': resource_class.get_resource_class, # v1.2, body # 'PUT': resource_class.update_resource_class, # v1.2, body, but time's arrow # v1.7, no body # 'DELETE': resource_class.delete_resource_class, # v1.2, no body # }, # # This means that in 1.15 we only expect last-modified headers for # the two GET requests, for the other requests we should confirm it # is not there. fixtures: - APIFixture defaults: request_headers: x-auth-token: admin accept: application/json content-type: application/json openstack-api-version: placement 1.15 tests: - name: get resource classes desc: last modified is now with standards only GET: /resource_classes response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ - name: create a custom class PUT: /resource_classes/CUSTOM_MOO_MACHINE status: 201 response_forbidden_headers: - last-modified - cache-control - name: get custom class GET: $LAST_URL response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ - name: get standard class GET: /resource_classes/VCPU response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? 
last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ - name: post a resource class POST: /resource_classes data: name: CUSTOM_ALPHA status: 201 response_forbidden_headers: - last-modified - cache-control - name: get resource classes including custom desc: last modified will still be now with customs because of standards GET: /resource_classes response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ - name: put a resource class 1.6 microversion PUT: /resource_classes/CUSTOM_MOO_MACHINE request_headers: openstack-api-version: placement 1.6 data: name: CUSTOM_BETA status: 200 response_forbidden_headers: - last-modified - cache-control - name: get resource classes 1.14 microversion GET: /resource_classes request_headers: openstack-api-version: placement 1.14 response_forbidden_headers: - last-modified - cache-control - name: get standard class 1.14 microversion GET: /resource_classes/VCPU request_headers: openstack-api-version: placement 1.14 response_forbidden_headers: - last-modified - cache-control - name: get custom class 1.14 microversion GET: $LAST_URL request_headers: openstack-api-version: placement 1.14 response_forbidden_headers: - last-modified - cache-control ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/resource-classes-legacy-rbac.yaml0000664000175000017500000000440100000000000033040 0ustar00zuulzuul00000000000000--- fixtures: - LegacyRBACPolicyFixture vars: - &project_id $ENVIRON['PROJECT_ID'] - &project_admin_headers x-auth-token: user x-roles: admin,member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_member_headers x-auth-token: user x-roles: member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest tests: - name: project member cannot list resource classes GET: /resource_classes request_headers: *project_member_headers status: 403 - name: project admin can list resource classes GET: /resource_classes request_headers: *project_admin_headers response_json_paths: $.resource_classes.`len`: 21 # Number of standard resource classes - name: project member cannot create resource classes POST: /resource_classes request_headers: *project_member_headers data: name: CUSTOM_RES_CLASS_POLICY status: 403 - name: project admin can create resource classes POST: /resource_classes request_headers: *project_admin_headers data: name: CUSTOM_RES_CLASS_POLICY status: 201 response_headers: location: //resource_classes/CUSTOM_RES_CLASS_POLICY/ - name: project member cannot show resource class GET: /resource_classes/CUSTOM_RES_CLASS_POLICY request_headers: *project_member_headers status: 403 - name: project admin can show resource class GET: /resource_classes/CUSTOM_RES_CLASS_POLICY request_headers: *project_admin_headers response_json_paths: $.name: CUSTOM_RES_CLASS_POLICY - name: project member cannot update resource class PUT: /resource_classes/CUSTOM_NEW_CLASS_POLICY request_headers: *project_member_headers status: 403 - name: project admin can update resource class PUT: /resource_classes/CUSTOM_NEW_CLASS_POLICY request_headers: *project_admin_headers status: 201 - name: project member cannot delete resource class DELETE: /resource_classes/CUSTOM_NEW_CLASS_POLICY request_headers: *project_member_headers status: 403 - name:
project admin can delete resource class DELETE: /resource_classes/CUSTOM_NEW_CLASS_POLICY request_headers: *project_admin_headers status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/resource-classes-policy.yaml0000664000175000017500000000206500000000000032172 0ustar00zuulzuul00000000000000# This tests the individual CRUD operations on /resource_classes # using a non-admin user with an open policy configuration. The # response validation is intentionally minimal. fixtures: - OpenPolicyFixture defaults: request_headers: x-auth-token: user accept: application/json content-type: application/json openstack-api-version: placement latest tests: - name: list resource classes GET: /resource_classes response_json_paths: $.resource_classes.`len`: 21 # Number of standard resource classes - name: create resource class POST: /resource_classes data: name: CUSTOM_RES_CLASS_POLICY status: 201 response_headers: location: //resource_classes/CUSTOM_RES_CLASS_POLICY/ - name: show resource class GET: /resource_classes/CUSTOM_RES_CLASS_POLICY response_json_paths: $.name: CUSTOM_RES_CLASS_POLICY - name: update resource class PUT: /resource_classes/CUSTOM_NEW_CLASS_POLICY status: 201 - name: delete resource class DELETE: /resource_classes/CUSTOM_NEW_CLASS_POLICY status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/resource-classes-secure-rbac.yaml0000664000175000017500000001670700000000000033072 0ustar00zuulzuul00000000000000--- fixtures: - SecureRBACPolicyFixture vars: - &project_id $ENVIRON['PROJECT_ID'] - &admin_project_id $ENVIRON['ADMIN_PROJECT_ID'] - &service_project_id $ENVIRON['SERVICE_PROJECT_ID'] - &admin_headers x-auth-token: user x-roles: admin x-project-id: admin_project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &service_headers x-auth-token: user x-roles: service x-project-id: service_project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &system_admin_headers x-auth-token: user x-roles: admin,member,reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &system_reader_headers x-auth-token: user x-roles: reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &project_admin_headers x-auth-token: user x-roles: admin,member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_member_headers x-auth-token: user x-roles: member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_reader_headers x-auth-token: user x-roles: reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest tests: - name: project admin can list resource classes GET: /resource_classes request_headers: *project_admin_headers response_json_paths: $.resource_classes.`len`: 21 # Number of standard resource classes - name: project member cannot list resource classes GET: /resource_classes request_headers: *project_member_headers status: 403 - name: project reader cannot list
resource classes GET: /resource_classes request_headers: *project_reader_headers status: 403 - name: system reader cannot list resource classes GET: /resource_classes request_headers: *system_reader_headers status: 403 - name: system admin cannot list resource classes GET: /resource_classes request_headers: *system_admin_headers status: 403 - name: admin can list resource classes GET: /resource_classes request_headers: *admin_headers response_json_paths: $.resource_classes.`len`: 21 # Number of standard resource classes - name: service can list resource classes GET: /resource_classes request_headers: *service_headers response_json_paths: $.resource_classes.`len`: 21 # Number of standard resource classes - name: admin can create resource classes POST: /resource_classes request_headers: *admin_headers data: name: CUSTOM_RES_CLASS_POLICY status: 201 response_headers: location: //resource_classes/CUSTOM_RES_CLASS_POLICY/ - name: service can create resource classes POST: /resource_classes request_headers: *service_headers data: name: CUSTOM_RES_CLASS_POLICY1 status: 201 response_headers: location: //resource_classes/CUSTOM_RES_CLASS_POLICY1/ - name: project admin can create resource classes POST: /resource_classes request_headers: *project_admin_headers data: name: CUSTOM_RES_CLASS_POLICY2 status: 201 response_headers: location: //resource_classes/CUSTOM_RES_CLASS_POLICY2/ - name: project member cannot create resource classes POST: /resource_classes request_headers: *project_member_headers data: name: CUSTOM_RES_CLASS_POLICY status: 403 - name: project reader cannot create resource classes POST: /resource_classes request_headers: *project_reader_headers data: name: CUSTOM_RES_CLASS_POLICY status: 403 - name: system reader cannot create resource classes POST: /resource_classes request_headers: *system_reader_headers data: name: CUSTOM_RES_CLASS_POLICY status: 403 - name: system admin cannot create resource classes POST: /resource_classes request_headers: *system_admin_headers data: name: CUSTOM_RES_CLASS_POLICY status: 403 - name: project admin can show resource class GET: /resource_classes/CUSTOM_RES_CLASS_POLICY request_headers: *project_admin_headers response_json_paths: $.name: CUSTOM_RES_CLASS_POLICY - name: project member cannot show resource class GET: /resource_classes/CUSTOM_RES_CLASS_POLICY request_headers: *project_member_headers status: 403 - name: project reader cannot show resource class GET: /resource_classes/CUSTOM_RES_CLASS_POLICY request_headers: *project_reader_headers status: 403 - name: system reader cannot show resource class GET: /resource_classes/CUSTOM_RES_CLASS_POLICY request_headers: *system_reader_headers status: 403 - name: system admin cannot show resource class GET: /resource_classes/CUSTOM_RES_CLASS_POLICY request_headers: *system_admin_headers status: 403 - name: admin can show resource class GET: /resource_classes/CUSTOM_RES_CLASS_POLICY request_headers: *admin_headers response_json_paths: $.name: CUSTOM_RES_CLASS_POLICY - name: service can show resource class GET: /resource_classes/CUSTOM_RES_CLASS_POLICY request_headers: *service_headers response_json_paths: $.name: CUSTOM_RES_CLASS_POLICY - name: project admin can update resource class PUT: /resource_classes/CUSTOM_NEW_CLASS_POLICY2 request_headers: *project_admin_headers status: 201 - name: admin can update resource class PUT: /resource_classes/CUSTOM_NEW_CLASS_POLICY request_headers: *admin_headers status: 201 - name: service can update resource class PUT: /resource_classes/CUSTOM_NEW_CLASS_POLICY1 
request_headers: *service_headers status: 201 - name: project member cannot update resource class PUT: /resource_classes/CUSTOM_NEW_CLASS_POLICY request_headers: *project_member_headers status: 403 - name: project reader cannot update resource class PUT: /resource_classes/CUSTOM_NEW_CLASS_POLICY request_headers: *project_reader_headers status: 403 - name: system reader cannot update resource class PUT: /resource_classes/CUSTOM_NEW_CLASS_POLICY request_headers: *system_reader_headers status: 403 - name: system admin cannot update resource class PUT: /resource_classes/CUSTOM_NEW_CLASS_POLICY request_headers: *system_admin_headers status: 403 - name: project admin can delete resource class DELETE: /resource_classes/CUSTOM_NEW_CLASS_POLICY2 request_headers: *project_admin_headers status: 204 - name: project member cannot delete resource class DELETE: /resource_classes/CUSTOM_NEW_CLASS_POLICY request_headers: *project_member_headers status: 403 - name: project reader cannot delete resource class DELETE: /resource_classes/CUSTOM_NEW_CLASS_POLICY request_headers: *project_reader_headers status: 403 - name: system reader cannot delete resource class DELETE: /resource_classes/CUSTOM_NEW_CLASS_POLICY request_headers: *system_reader_headers status: 403 - name: system admin cannot delete resource class DELETE: /resource_classes/CUSTOM_NEW_CLASS_POLICY request_headers: *system_admin_headers status: 403 - name: admin can delete resource class DELETE: /resource_classes/CUSTOM_NEW_CLASS_POLICY request_headers: *admin_headers status: 204 - name: service can delete resource class DELETE: /resource_classes/CUSTOM_NEW_CLASS_POLICY1 request_headers: *service_headers status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/resource-classes.yaml0000664000175000017500000002132600000000000030676 0ustar00zuulzuul00000000000000fixtures: - APIFixture defaults: request_headers: x-auth-token: admin accept: application/json openstack-api-version: placement latest tests: - name: test microversion masks resource-classes endpoint for list with 404 GET: /resource_classes request_headers: openstack-api-version: placement 1.1 status: 404 response_json_paths: $.errors[0].title: Not Found - name: test microversion masks resource-classes endpoint for create with 404 desc: we want to get a 404 even if content-type is correct POST: /resource_classes request_headers: openstack-api-version: placement 1.1 content-type: application/json data: name: CUSTOM_NFV_BAR status: 404 response_json_paths: $.errors[0].title: Not Found - name: test microversion mask when wrong content type desc: we want to get a 404 before a 415 POST: /resource_classes request_headers: openstack-api-version: placement 1.1 content-type: text/plain data: data status: 404 - name: test wrong content type desc: we want to get a 415 when bad content type POST: /resource_classes request_headers: openstack-api-version: placement 1.2 content-type: text/plain data: data status: 415 - name: non admin forbidden GET: /resource_classes request_headers: x-auth-token: user accept: application/json status: 403 response_json_paths: $.errors[0].title: Forbidden - name: post invalid non json POST: /resource_classes request_headers: accept: text/plain content-type: application/json data: name: FOO status: 400 response_strings: - JSON does not validate - name: post illegal characters in name POST: /resource_classes request_headers: content-type: application/json 
data: name: CUSTOM_Illegal&@!Name? status: 400 response_strings: - JSON does not validate response_json_paths: $.errors[0].title: Bad Request - name: post new resource class POST: /resource_classes request_headers: content-type: application/json data: name: $ENVIRON['CUSTOM_RES_CLASS'] status: 201 response_headers: location: //resource_classes/$ENVIRON['CUSTOM_RES_CLASS']/ response_forbidden_headers: - content-type - name: try to create same again POST: /resource_classes request_headers: content-type: application/json data: name: $ENVIRON['CUSTOM_RES_CLASS'] status: 409 response_strings: - Conflicting resource class already exists response_json_paths: $.errors[0].title: Conflict - name: confirm the correct post GET: /resource_classes/$ENVIRON['CUSTOM_RES_CLASS'] request_headers: content-type: application/json response_json_paths: $.name: $ENVIRON['CUSTOM_RES_CLASS'] $.links[?rel = "self"].href: /resource_classes/$ENVIRON['CUSTOM_RES_CLASS'] - name: test microversion masks resource-classes endpoint for show with 404 GET: /resource_classes/$ENVIRON['CUSTOM_RES_CLASS'] request_headers: openstack-api-version: placement 1.1 status: 404 response_json_paths: $.errors[0].title: Not Found - name: get resource class works with no accept GET: /resource_classes/$ENVIRON['CUSTOM_RES_CLASS'] request_headers: content-type: application/json response_headers: content-type: /application/json/ response_json_paths: $.name: $ENVIRON['CUSTOM_RES_CLASS'] - name: list resource classes after addition of custom res class GET: /resource_classes response_json_paths: $.resource_classes.`len`: 22 # 21 standard plus 1 custom - name: update standard resource class bad json PUT: /resource_classes/VCPU request_headers: content-type: application/json openstack-api-version: placement 1.6 data: name: VCPU_ALTERNATE status: 400 response_strings: - JSON does not validate response_json_paths: $.errors[0].title: Bad Request - name: update standard resource class to custom desc: standard classes cannot be updated PUT: /resource_classes/VCPU request_headers: content-type: application/json openstack-api-version: placement 1.6 data: name: $ENVIRON['CUSTOM_RES_CLASS'] status: 400 response_strings: - Cannot update standard resource class VCPU response_json_paths: $.errors[0].title: Bad Request - name: update custom resource class to standard resource class name PUT: /resource_classes/$ENVIRON['CUSTOM_RES_CLASS'] request_headers: content-type: application/json openstack-api-version: placement 1.6 data: name: VCPU status: 400 response_strings: - JSON does not validate response_json_paths: $.errors[0].title: Bad Request - name: post another custom resource class POST: /resource_classes request_headers: content-type: application/json data: name: CUSTOM_NFV_FOO status: 201 - name: update custom resource class to already existing custom resource class name PUT: /resource_classes/CUSTOM_NFV_FOO request_headers: content-type: application/json openstack-api-version: placement 1.6 data: name: $ENVIRON['CUSTOM_RES_CLASS'] status: 409 response_strings: - Resource class already exists - $ENVIRON['CUSTOM_RES_CLASS'] response_json_paths: $.errors[0].title: Conflict - name: test microversion masks resource-classes endpoint for update with 404 PUT: /resource_classes/$ENVIRON['CUSTOM_RES_CLASS'] request_headers: openstack-api-version: placement 1.1 content-type: application/json data: name: CUSTOM_NFV_BAR status: 404 response_json_paths: $.errors[0].title: Not Found - name: update custom resource class with additional properties PUT: 
/resource_classes/$ENVIRON['CUSTOM_RES_CLASS'] request_headers: content-type: application/json openstack-api-version: placement 1.6 data: name: CUSTOM_NFV_BAR additional: additional status: 400 response_strings: - Additional properties are not allowed - name: update custom resource class PUT: /resource_classes/$ENVIRON['CUSTOM_RES_CLASS'] request_headers: content-type: application/json openstack-api-version: placement 1.6 data: name: CUSTOM_NFV_BAR status: 200 response_json_paths: $.name: CUSTOM_NFV_BAR $.links[?rel = "self"].href: /resource_classes/CUSTOM_NFV_BAR - name: delete standard resource class DELETE: /resource_classes/VCPU status: 400 response_strings: - Cannot delete standard resource class response_json_paths: $.errors[0].title: Bad Request - name: test microversion masks resource-classes endpoint for delete with 404 DELETE: /resource_classes/CUSTOM_NFV_BAR request_headers: openstack-api-version: placement 1.1 status: 404 response_json_paths: $.errors[0].title: Not Found - name: delete custom resource class DELETE: /resource_classes/CUSTOM_NFV_BAR status: 204 - name: 404 on deleted resource class DELETE: $LAST_URL status: 404 response_json_paths: $.errors[0].title: Not Found - name: post malformed json as json POST: /resource_classes request_headers: content-type: application/json data: '{"foo": }' status: 400 response_strings: - 'Malformed JSON:' response_json_paths: $.errors[0].title: Bad Request - name: post bad resource class name IRON_NFV POST: /resource_classes request_headers: content-type: application/json data: name: IRON_NFV # Doesn't start with CUSTOM_ status: 400 response_strings: - JSON does not validate response_json_paths: $.errors[0].title: Bad Request - name: try to create resource class with name exceed max characters POST: /resource_classes request_headers: content-type: application/json data: name: &name_exceeds_max_length_check CUSTOM_THIS_IS_A_LONG_TEXT_OF_LENGTH_256_CHARACTERSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS status: 400 response_strings: - "Failed validating 'maxLength'" response_json_paths: $.errors[0].title: Bad Request - name: try to update resource class with name exceed max characters PUT: /resource_classes/$ENVIRON['CUSTOM_RES_CLASS'] request_headers: content-type: application/json openstack-api-version: placement 1.6 data: name: *name_exceeds_max_length_check status: 400 response_strings: - "Failed validating 'maxLength'" response_json_paths: $.errors[0].title: Bad Request - name: try to create resource class with additional properties POST: /resource_classes request_headers: content-type: application/json data: name: CUSTOM_NFV_BAR additional: additional status: 400 response_strings: - Additional properties are not allowed ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/resource-provider-aggregates.yaml0000664000175000017500000002677200000000000033214 0ustar00zuulzuul00000000000000# Tests filtering resource providers by aggregates fixtures: - APIFixture defaults: request_headers: x-auth-token: admin content-type: application/json accept: application/json openstack-api-version: placement latest tests: - name: post new provider 1 POST: /resource_providers data: name: rp_1 uuid: 893337e9-1e55-49f0-bcfe-6a2f16fbf2f7 status: 200 - name: post new provider 2 POST: 
/resource_providers data: name: rp_2 uuid: 5202c48f-c960-4eec-bde3-89c4f22a17b9 status: 200 - name: post new provider 3 POST: /resource_providers data: name: rp_3 uuid: 0621521c-ad3a-4f9c-9b72-2933788fab19 status: 200 - name: get by aggregates no result GET: '/resource_providers?member_of=in:83a3d69d-8920-48e2-8914-cadfd8fa2f91' response_json_paths: $.resource_providers: [] - name: associate an aggregate with rp1 PUT: /resource_providers/893337e9-1e55-49f0-bcfe-6a2f16fbf2f7/aggregates data: aggregates: - 83a3d69d-8920-48e2-8914-cadfd8fa2f91 resource_provider_generation: 0 status: 200 - name: get by aggregates one result GET: '/resource_providers?member_of=in:83a3d69d-8920-48e2-8914-cadfd8fa2f91' response_json_paths: $.resource_providers[0].uuid: 893337e9-1e55-49f0-bcfe-6a2f16fbf2f7 - name: get by aggregates one result no in GET: '/resource_providers?member_of=83a3d69d-8920-48e2-8914-cadfd8fa2f91' response_json_paths: $.resource_providers[0].uuid: 893337e9-1e55-49f0-bcfe-6a2f16fbf2f7 - name: get by aggregates no result not a uuid GET: '/resource_providers?member_of=not+a+uuid' status: 400 response_strings: - "Expected 'member_of' parameter to contain valid UUID(s)." response_json_paths: $.errors[0].title: Bad Request - name: associate an aggregate with rp2 PUT: /resource_providers/5202c48f-c960-4eec-bde3-89c4f22a17b9/aggregates data: aggregates: - 83a3d69d-8920-48e2-8914-cadfd8fa2f91 resource_provider_generation: 0 status: 200 - name: get by aggregates two result GET: '/resource_providers?member_of=in:83a3d69d-8920-48e2-8914-cadfd8fa2f91' response_json_paths: $.resource_providers.`len`: 2 $.resource_providers[0].uuid: /5202c48f-c960-4eec-bde3-89c4f22a17b9|893337e9-1e55-49f0-bcfe-6a2f16fbf2f7/ $.resource_providers[1].uuid: /5202c48f-c960-4eec-bde3-89c4f22a17b9|893337e9-1e55-49f0-bcfe-6a2f16fbf2f7/ - name: associate another aggregate with rp2 PUT: /resource_providers/5202c48f-c960-4eec-bde3-89c4f22a17b9/aggregates data: aggregates: - 99652f11-9f77-46b9-80b7-4b1989be9f8c resource_provider_generation: 1 status: 200 - name: get by both aggregates two GET: '/resource_providers?member_of=in:83a3d69d-8920-48e2-8914-cadfd8fa2f91,99652f11-9f77-46b9-80b7-4b1989be9f8c' response_json_paths: $.resource_providers.`len`: 2 $.resource_providers[0].uuid: /5202c48f-c960-4eec-bde3-89c4f22a17b9|893337e9-1e55-49f0-bcfe-6a2f16fbf2f7/ $.resource_providers[1].uuid: /5202c48f-c960-4eec-bde3-89c4f22a17b9|893337e9-1e55-49f0-bcfe-6a2f16fbf2f7/ - name: clear aggregates on rp1 PUT: /resource_providers/893337e9-1e55-49f0-bcfe-6a2f16fbf2f7/aggregates data: aggregates: [] resource_provider_generation: 1 status: 200 - name: get by both aggregates one desc: only one result because we disassociated aggregates in the PUT above GET: '/resource_providers?member_of=in:83a3d69d-8920-48e2-8914-cadfd8fa2f91,99652f11-9f77-46b9-80b7-4b1989be9f8c' response_json_paths: $.resource_providers.`len`: 1 $.resource_providers[0].uuid: 5202c48f-c960-4eec-bde3-89c4f22a17b9 - name: error on old microversion GET: '/resource_providers?member_of=in:83a3d69d-8920-48e2-8914-cadfd8fa2f91,99652f11-9f77-46b9-80b7-4b1989be9f8c' request_headers: openstack-api-version: placement 1.1 status: 400 response_strings: - 'Invalid query string parameters' response_json_paths: $.errors[0].title: Bad Request - name: error on bogus query parameter GET: '/resource_providers?assoc_with_aggregate=in:83a3d69d-8920-48e2-8914-cadfd8fa2f91,99652f11-9f77-46b9-80b7-4b1989be9f8c' status: 400 response_strings: - 'Invalid query string parameters' response_json_paths: 
$.errors[0].title: Bad Request - name: error trying multiple member_of params prior correct microversion GET: '/resource_providers?member_of=83a3d69d-8920-48e2-8914-cadfd8fa2f91&member_of=99652f11-9f77-46b9-80b7-4b1989be9f8c' request_headers: openstack-api-version: placement 1.23 status: 400 response_strings: - 'Multiple member_of parameters are not supported' response_json_paths: $.errors[0].title: Bad Request - name: multiple member_of params with no results GET: '/resource_providers?member_of=83a3d69d-8920-48e2-8914-cadfd8fa2f91&member_of=99652f11-9f77-46b9-80b7-4b1989be9f8c' status: 200 response_json_paths: # No provider is associated with both aggregates resource_providers: [] - name: associate two aggregates with rp2 PUT: /resource_providers/5202c48f-c960-4eec-bde3-89c4f22a17b9/aggregates data: aggregates: - 99652f11-9f77-46b9-80b7-4b1989be9f8c - 83a3d69d-8920-48e2-8914-cadfd8fa2f91 resource_provider_generation: 2 status: 200 - name: multiple member_of params AND together to result in one provider GET: '/resource_providers?member_of=83a3d69d-8920-48e2-8914-cadfd8fa2f91&member_of=99652f11-9f77-46b9-80b7-4b1989be9f8c' status: 200 response_json_paths: # One provider is now associated with both aggregates $.resource_providers.`len`: 1 $.resource_providers[0].uuid: 5202c48f-c960-4eec-bde3-89c4f22a17b9 - name: associate two aggregates to rp1, one of which overlaps with rp2 PUT: /resource_providers/893337e9-1e55-49f0-bcfe-6a2f16fbf2f7/aggregates data: aggregates: - 282d469e-29e2-4a8a-8f2e-31b3202b696a - 83a3d69d-8920-48e2-8914-cadfd8fa2f91 resource_provider_generation: 2 status: 200 - name: two AND'd member_ofs with one OR'd member_of GET: '/resource_providers?member_of=83a3d69d-8920-48e2-8914-cadfd8fa2f91&member_of=in:99652f11-9f77-46b9-80b7-4b1989be9f8c,282d469e-29e2-4a8a-8f2e-31b3202b696a' status: 200 response_json_paths: # Both rp1 and rp2 returned because both are associated with agg 83a3d69d # and each is associated with either agg 99652f11 or agg 282s469e $.resource_providers.`len`: 2 $.resource_providers[0].uuid: /5202c48f-c960-4eec-bde3-89c4f22a17b9|893337e9-1e55-49f0-bcfe-6a2f16fbf2f7/ $.resource_providers[1].uuid: /5202c48f-c960-4eec-bde3-89c4f22a17b9|893337e9-1e55-49f0-bcfe-6a2f16fbf2f7/ - name: two AND'd member_ofs using same agg UUID GET: '/resource_providers?member_of=282d469e-29e2-4a8a-8f2e-31b3202b696a&member_of=282d469e-29e2-4a8a-8f2e-31b3202b696a' status: 200 response_json_paths: # Only rp2 returned since it's the only one associated with the duplicated agg $.resource_providers.`len`: 1 $.resource_providers[0].uuid: /893337e9-1e55-49f0-bcfe-6a2f16fbf2f7/ # Tests for negative aggregate membership from microversion 1.32 # Now the aggregation map is as below # { # 893337e9-1e55-49f0-bcfe-6a2f16fbf2f7 (rp_1): # [83a3d69d-8920-48e2-8914-cadfd8fa2f91, 282d469e-29e2-4a8a-8f2e-31b3202b696a] # 5202c48f-c960-4eec-bde3-89c4f22a17b9 (rp_2) # [83a3d69d-8920-48e2-8914-cadfd8fa2f91, 99652f11-9f77-46b9-80b7-4b1989be9f8c] # 0621521c-ad3a-4f9c-9b72-2933788fab19 (rp_3): # [] # } - name: negative agg error on old microversion with ! 
prefix GET: /resource_providers?member_of=!282d469e-29e2-4a8a-8f2e-31b3202b696a status: 400 request_headers: openstack-api-version: placement 1.31 response_strings: - "Forbidden member_of parameters are not supported in the specified microversion" - name: negative agg error on old microversion with !in prefix GET: /allocation_candidates?resources=VCPU:1&member_of=!in:282d469e-29e2-4a8a-8f2e-31b3202b696a status: 400 request_headers: openstack-api-version: placement 1.31 response_strings: - "Forbidden member_of parameters are not supported in the specified microversion" - name: negative agg error on invalid agg GET: /resource_providers?member_of=!(^o^) status: 400 request_headers: openstack-api-version: placement 1.32 response_strings: - "Invalid query string parameters: Expected 'member_of' parameter to contain valid UUID(s)." - name: negative agg error on invalid usage of in prefix GET: /resource_providers?resources=VCPU:1&member_of=in:99652f11-9f77-46b9-80b7-4b1989be9f8c,!282d469e-29e2-4a8a-8f2e-31b3202b696a status: 400 request_headers: openstack-api-version: placement 1.32 response_strings: - "Invalid query string parameters: Expected 'member_of' parameter to contain valid UUID(s)." - name: negative agg GET: /resource_providers?member_of=!282d469e-29e2-4a8a-8f2e-31b3202b696a status: 200 request_headers: openstack-api-version: placement 1.32 response_json_paths: # rp_2 is excluded $.resource_providers.`len`: 2 $.resource_providers[0].uuid: /5202c48f-c960-4eec-bde3-89c4f22a17b9|0621521c-ad3a-4f9c-9b72-2933788fab19/ $.resource_providers[1].uuid: /5202c48f-c960-4eec-bde3-89c4f22a17b9|0621521c-ad3a-4f9c-9b72-2933788fab19/ - name: negative agg multiple GET: /resource_providers?member_of=!282d469e-29e2-4a8a-8f2e-31b3202b696a&member_of=!99652f11-9f77-46b9-80b7-4b1989be9f8c status: 200 request_headers: openstack-api-version: placement 1.32 response_json_paths: # Both rp_1 and rp_2 are excluded $.resource_providers.`len`: 1 $.resource_providers[0].uuid: 0621521c-ad3a-4f9c-9b72-2933788fab19 - name: negative agg with in prefix GET: /resource_providers?member_of=!in:282d469e-29e2-4a8a-8f2e-31b3202b696a,99652f11-9f77-46b9-80b7-4b1989be9f8c status: 200 request_headers: openstack-api-version: placement 1.32 response_json_paths: # The same results as above $.resource_providers.`len`: 1 $.resource_providers[0].uuid: 0621521c-ad3a-4f9c-9b72-2933788fab19 - name: negative agg with positive agg GET: /resource_providers?member_of=!282d469e-29e2-4a8a-8f2e-31b3202b696a&member_of=83a3d69d-8920-48e2-8914-cadfd8fa2f91 status: 200 request_headers: openstack-api-version: placement 1.32 response_json_paths: # only rp_2 is returned $.resource_providers.`len`: 1 $.resource_providers[0].uuid: 5202c48f-c960-4eec-bde3-89c4f22a17b9 - name: negative agg multiple with positive agg GET: /resource_providers?member_of=!in:282d469e-29e2-4a8a-8f2e-31b3202b696a,83a3d69d-8920-48e2-8914-cadfd8fa2f91&member_of=99652f11-9f77-46b9-80b7-4b1989be9f8c status: 200 request_headers: openstack-api-version: placement 1.32 response_json_paths: # no rp is returned $.resource_providers.`len`: 0 # This request is equivalent to the one in "negative agg with positive agg" - name: negative agg with the same agg on positive get rp GET: /resource_providers?member_of=!282d469e-29e2-4a8a-8f2e-31b3202b696a&member_of=in:83a3d69d-8920-48e2-8914-cadfd8fa2f91,282d469e-29e2-4a8a-8f2e-31b3202b696a status: 200 request_headers: openstack-api-version: placement 1.32 response_json_paths: $.resource_providers.`len`: 1 $.resource_providers[0].uuid: 
5202c48f-c960-4eec-bde3-89c4f22a17b9 - name: negative agg with the same agg on positive no rp GET: /resource_providers?member_of=!282d469e-29e2-4a8a-8f2e-31b3202b696a&member_of=282d469e-29e2-4a8a-8f2e-31b3202b696a status: 200 request_headers: openstack-api-version: placement 1.32 response_json_paths: # no rp is returned $.resource_providers.`len`: 0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/resource-provider-any-traits.yaml0000664000175000017500000000332600000000000033164 0ustar00zuulzuul00000000000000fixtures: - GranularFixture defaults: request_headers: x-auth-token: admin accept: application/json openstack-api-version: placement latest tests: - name: the 'in:' trait query is not supported yet GET: /resource_providers?required=in:CUSTOM_FOO,HW_CPU_X86_MMX request_headers: openstack-api-version: placement 1.38 status: 400 response_strings: - "The format 'in:HW_CPU_X86_VMX,CUSTOM_MAGIC' only supported since microversion 1.39." - name: the second required field overwrites the first # The fixture has one RP for each trait but no RP for both traits. # As the second 'required' key overwrites the first in <= 1.38 we expect # that one of that RPs will be returned. GET: /resource_providers?required=CUSTOM_FOO&required=HW_CPU_X86_MMX request_headers: openstack-api-version: placement 1.38 status: 200 response_json_paths: $.resource_providers.`len`: 1 - name: list providers with both OR, AND, and NOT trait queries # DXVA or TLS would allow all the RPs, AVX filters that down to the left and # the middle but FOO forbids the left so the middle remains GET: /resource_providers?required=in:HW_GPU_API_DXVA,HW_NIC_ACCEL_TLS&required=HW_CPU_X86_AVX,!CUSTOM_FOO status: 200 response_json_paths: $.resource_providers.`len`: 1 $.resource_providers[0].name: cn_middle - name: have multiple OR queries # MMX or TLS matches middle and right, SSD or FOO matches left, right and # shr_disk_1. So only right is a total match. 
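# NOTE: purely as an illustrative sketch (not an assertion made by this
# fixture), the required query grammar since microversion 1.39 lets AND, OR
# and NOT terms be combined in one request, for example:
#   required=in:HW_GPU_API_DXVA,HW_NIC_ACCEL_TLS&required=HW_CPU_X86_AVX,!CUSTOM_FOO
# which selects providers with (DXVA OR TLS) AND AVX AND NOT CUSTOM_FOO, as
# exercised by the mixed OR/AND/NOT test above.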
GET: /resource_providers?required=in:HW_CPU_X86_MMX,HW_NIC_ACCEL_TLS&required=in:CUSTOM_DISK_SSD,CUSTOM_FOO status: 200 response_json_paths: $.resource_providers.`len`: 1 $.resource_providers[0].name: cn_right ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/resource-provider-bug-1779818.yaml0000664000175000017500000001233200000000000032511 0ustar00zuulzuul00000000000000# Test launchpad bug https://bugs.launchpad.net/nova/+bug/1779818 fixtures: - APIFixture defaults: request_headers: x-auth-token: admin accept: application/json openstack-api-version: placement latest tests: - name: post a resource provider as alt_parent POST: /resource_providers request_headers: content-type: application/json data: name: alt_parent uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] status: 200 response_json_paths: $.uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] $.name: alt_parent $.parent_provider_uuid: null $.generation: 0 - name: post another resource provider as parent POST: /resource_providers request_headers: content-type: application/json data: name: parent uuid: $ENVIRON['PARENT_PROVIDER_UUID'] status: 200 response_json_paths: $.uuid: $ENVIRON['PARENT_PROVIDER_UUID'] $.name: parent $.parent_provider_uuid: null $.generation: 0 - name: post a child resource provider of the parent POST: /resource_providers request_headers: content-type: application/json data: name: child uuid: $ENVIRON['RP_UUID'] parent_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] status: 200 response_json_paths: $.uuid: $ENVIRON['RP_UUID'] $.name: child $.parent_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] $.generation: 0 # Let's validate that now we have two tree structures # * alt_parent # * parent # | # +-- child - name: list all resource providers GET: /resource_providers response_json_paths: $.resource_providers.`len`: 3 $.resource_providers[?uuid="$ENVIRON['ALT_PARENT_PROVIDER_UUID']"].root_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] $.resource_providers[?uuid="$ENVIRON['ALT_PARENT_PROVIDER_UUID']"].parent_provider_uuid: null $.resource_providers[?uuid="$ENVIRON['PARENT_PROVIDER_UUID']"].root_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] $.resource_providers[?uuid="$ENVIRON['PARENT_PROVIDER_UUID']"].parent_provider_uuid: null $.resource_providers[?uuid="$ENVIRON['RP_UUID']"].root_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] $.resource_providers[?uuid="$ENVIRON['RP_UUID']"].parent_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] # Let's re-parent the parent to the alternative parent # so that we have only one tree. 
# * alt_parent # | # +-- parent # | # +-- child - name: update a parent of the parent PUT: /resource_providers/$ENVIRON['PARENT_PROVIDER_UUID'] request_headers: content-type: application/json data: name: parent parent_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] status: 200 # Let's validate that we have only one root provider now - name: list all resource providers updated GET: /resource_providers response_json_paths: $.resource_providers.`len`: 3 $.resource_providers[?uuid="$ENVIRON['ALT_PARENT_PROVIDER_UUID']"].root_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] $.resource_providers[?uuid="$ENVIRON['ALT_PARENT_PROVIDER_UUID']"].parent_provider_uuid: null $.resource_providers[?uuid="$ENVIRON['PARENT_PROVIDER_UUID']"].root_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] $.resource_providers[?uuid="$ENVIRON['PARENT_PROVIDER_UUID']"].parent_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] $.resource_providers[?uuid="$ENVIRON['RP_UUID']"].root_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] $.resource_providers[?uuid="$ENVIRON['RP_UUID']"].parent_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] - name: list all resource providers in a tree with the child GET: /resource_providers?in_tree=$ENVIRON['RP_UUID'] response_json_paths: $.resource_providers.`len`: 3 $.resource_providers[?uuid="$ENVIRON['RP_UUID']"].root_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] $.resource_providers[?uuid="$ENVIRON['ALT_PARENT_PROVIDER_UUID']"].root_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] $.resource_providers[?uuid="$ENVIRON['PARENT_PROVIDER_UUID']"].root_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] - name: list all resource providers in a tree with the parent GET: /resource_providers?in_tree=$ENVIRON['PARENT_PROVIDER_UUID'] response_json_paths: $.resource_providers.`len`: 3 $.resource_providers[?uuid="$ENVIRON['RP_UUID']"].root_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] $.resource_providers[?uuid="$ENVIRON['ALT_PARENT_PROVIDER_UUID']"].root_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] $.resource_providers[?uuid="$ENVIRON['PARENT_PROVIDER_UUID']"].root_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] - name: list all resource providers in a tree with the alternative parent GET: /resource_providers?in_tree=$ENVIRON['ALT_PARENT_PROVIDER_UUID'] response_json_paths: $.resource_providers.`len`: 3 $.resource_providers[?uuid="$ENVIRON['RP_UUID']"].root_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] $.resource_providers[?uuid="$ENVIRON['ALT_PARENT_PROVIDER_UUID']"].root_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] $.resource_providers[?uuid="$ENVIRON['PARENT_PROVIDER_UUID']"].root_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/resource-provider-duplication.yaml0000664000175000017500000000226400000000000033404 0ustar00zuulzuul00000000000000# Verify different error messages when attempting to create a # resource provider with a duplicated name or UUID.
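# NOTE: the tests below only assert on substrings of the 409 response. As a
# rough, illustrative sketch (field values here are assumptions, not literal
# server output), a duplicate-name conflict body has roughly this shape:
#   {"errors": [{"title": "Conflict",
#                "detail": "Conflicting resource provider name: ... already exists.",
#                "code": "placement.duplicate_name"}]}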
fixtures: - APIFixture defaults: request_headers: x-auth-token: admin accept: application/json content-type: application/json tests: - name: post new resource provider POST: /resource_providers data: name: shared disk uuid: $ENVIRON['RP_UUID'] status: 201 - name: same uuid different name POST: /resource_providers data: name: shared disk X uuid: $ENVIRON['RP_UUID'] status: 409 response_strings: - "Conflicting resource provider uuid: $ENVIRON['RP_UUID']" - name: same name different uuid POST: /resource_providers data: name: shared disk uuid: 2c2059d8-005c-4f5c-82b1-b1701b1a29b7 status: 409 response_strings: - 'Conflicting resource provider name: shared disk' # On this one, don't test for which field was a duplicate because # that depends on how the database reports columns. - name: same name same uuid POST: /resource_providers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 409 response_strings: - Conflicting resource provider ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/resource-provider-legacy-rbac.yaml0000664000175000017500000001334700000000000033246 0ustar00zuulzuul00000000000000--- fixtures: - LegacyRBACPolicyFixture vars: - &project_id $ENVIRON['PROJECT_ID'] - &system_admin_headers x-auth-token: user x-roles: admin,member,reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &system_reader_headers x-auth-token: user x-roles: reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &project_admin_headers x-auth-token: user x-roles: admin,member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_member_headers x-auth-token: user x-roles: member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_reader_headers x-auth-token: user x-roles: reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest tests: - name: system admin can list resource providers GET: /resource_providers request_headers: *system_admin_headers response_json_paths: $.resource_providers: [] - name: system reader cannot list resource providers GET: /resource_providers request_headers: *system_reader_headers status: 403 - name: project admin can list resource providers GET: /resource_providers request_headers: *project_admin_headers response_json_paths: $.resource_providers: [] - name: project member cannot list resource providers GET: /resource_providers request_headers: *project_member_headers status: 403 - name: project reader cannot list resource providers GET: /resource_providers request_headers: *project_reader_headers status: 403 - name: system admin can create resource providers POST: /resource_providers request_headers: *system_admin_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 response_json_paths: $.uuid: $ENVIRON['RP_UUID'] - name: system reader cannot create resource providers POST: /resource_providers request_headers: *system_reader_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 403 - name: system admin can delete resource provider DELETE: /resource_providers/$ENVIRON['RP_UUID'] request_headers: 
*system_admin_headers status: 204 - name: project admin can create resource providers POST: /resource_providers request_headers: *project_admin_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] response_json_paths: $.uuid: $ENVIRON['RP_UUID'] - name: project member cannot create resource providers POST: /resource_providers request_headers: *project_member_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 403 - name: project reader cannot create resource providers POST: /resource_providers request_headers: *project_reader_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 403 - name: system admin can show resource provider GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *system_admin_headers response_json_paths: $.uuid: $ENVIRON['RP_UUID'] - name: system reader cannot show resource provider GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *system_reader_headers status: 403 - name: project admin can show resource provider GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *project_admin_headers response_json_paths: $.uuid: $ENVIRON['RP_UUID'] - name: project member cannot show resource provider GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *project_member_headers status: 403 - name: project reader cannot show resource provider GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *project_reader_headers status: 403 - name: system admin can update resource provider PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *system_admin_headers data: name: new name status: 200 response_json_paths: $.name: new name $.uuid: $ENVIRON['RP_UUID'] - name: system reader cannot update resource provider PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *system_reader_headers data: name: new name status: 403 - name: project admin can update resource provider PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *project_admin_headers data: name: new name status: 200 response_json_paths: $.name: new name $.uuid: $ENVIRON['RP_UUID'] - name: project member cannot update resource provider PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *project_member_headers data: name: new name status: 403 - name: project reader cannot update resource provider PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *project_reader_headers data: name: new name status: 403 - name: system reader cannot delete resource provider DELETE: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *system_reader_headers status: 403 - name: project member cannot delete resource provider DELETE: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *project_member_headers status: 403 - name: project reader cannot delete resource provider DELETE: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *project_reader_headers status: 403 - name: project admin can delete resource provider DELETE: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *project_admin_headers status: 204 # We tested that system admins can delete resource providers above ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/resource-provider-links.yaml0000664000175000017500000001112400000000000032204 0ustar00zuulzuul00000000000000# Confirm that the links provided when getting one or more resources # providers are correct. They vary across different microversions. 
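# NOTE: as a quick reference distilled from the tests below, the link rels
# returned for a provider grow with the microversion roughly as follows:
#   1.0          self, inventories, usages
#   1.1 - 1.5    + aggregates
#   1.6 - 1.10   + traits
#   1.11+        + allocations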
fixtures: - APIFixture defaults: request_headers: x-auth-token: admin accept: application/json tests: - name: post new resource provider POST: /resource_providers request_headers: content-type: application/json data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 201 - name: get rp latest GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: openstack-api-version: placement latest response_json_paths: $.links.`len`: 6 $.links[?rel = "self"].href: /resource_providers/$ENVIRON['RP_UUID'] $.links[?rel = "inventories"].href: /resource_providers/$ENVIRON['RP_UUID']/inventories $.links[?rel = "aggregates"].href: /resource_providers/$ENVIRON['RP_UUID']/aggregates $.links[?rel = "usages"].href: /resource_providers/$ENVIRON['RP_UUID']/usages $.links[?rel = "allocations"].href: /resource_providers/$ENVIRON['RP_UUID']/allocations $.links[?rel = "traits"].href: /resource_providers/$ENVIRON['RP_UUID']/traits - name: get rp 1.0 GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: openstack-api-version: placement 1.0 response_json_paths: $.links.`len`: 3 $.links[?rel = "self"].href: /resource_providers/$ENVIRON['RP_UUID'] $.links[?rel = "inventories"].href: /resource_providers/$ENVIRON['RP_UUID']/inventories $.links[?rel = "usages"].href: /resource_providers/$ENVIRON['RP_UUID']/usages - name: get rp 1.1 desc: aggregates added in 1.1 GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: openstack-api-version: placement 1.1 response_json_paths: $.links.`len`: 4 $.links[?rel = "self"].href: /resource_providers/$ENVIRON['RP_UUID'] $.links[?rel = "inventories"].href: /resource_providers/$ENVIRON['RP_UUID']/inventories $.links[?rel = "usages"].href: /resource_providers/$ENVIRON['RP_UUID']/usages $.links[?rel = "aggregates"].href: /resource_providers/$ENVIRON['RP_UUID']/aggregates - name: get rp 1.5 desc: traits added after 1.5 GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: openstack-api-version: placement 1.5 response_json_paths: $.links.`len`: 4 $.links[?rel = "self"].href: /resource_providers/$ENVIRON['RP_UUID'] $.links[?rel = "inventories"].href: /resource_providers/$ENVIRON['RP_UUID']/inventories $.links[?rel = "usages"].href: /resource_providers/$ENVIRON['RP_UUID']/usages $.links[?rel = "aggregates"].href: /resource_providers/$ENVIRON['RP_UUID']/aggregates - name: get rp 1.6 desc: traits added in 1.6 GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: openstack-api-version: placement 1.6 response_json_paths: $.links.`len`: 5 $.links[?rel = "self"].href: /resource_providers/$ENVIRON['RP_UUID'] $.links[?rel = "inventories"].href: /resource_providers/$ENVIRON['RP_UUID']/inventories $.links[?rel = "usages"].href: /resource_providers/$ENVIRON['RP_UUID']/usages $.links[?rel = "aggregates"].href: /resource_providers/$ENVIRON['RP_UUID']/aggregates $.links[?rel = "traits"].href: /resource_providers/$ENVIRON['RP_UUID']/traits - name: get rp 1.7 desc: nothing new in 1.7 GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: openstack-api-version: placement 1.7 response_json_paths: $.links.`len`: 5 $.links[?rel = "self"].href: /resource_providers/$ENVIRON['RP_UUID'] $.links[?rel = "inventories"].href: /resource_providers/$ENVIRON['RP_UUID']/inventories $.links[?rel = "usages"].href: /resource_providers/$ENVIRON['RP_UUID']/usages $.links[?rel = "aggregates"].href: /resource_providers/$ENVIRON['RP_UUID']/aggregates $.links[?rel = "traits"].href: /resource_providers/$ENVIRON['RP_UUID']/traits - name: get rp allocations link added in 1.11 GET: 
/resource_providers/$ENVIRON['RP_UUID'] request_headers: openstack-api-version: placement 1.11 response_json_paths: $.links.`len`: 6 $.links[?rel = "self"].href: /resource_providers/$ENVIRON['RP_UUID'] $.links[?rel = "inventories"].href: /resource_providers/$ENVIRON['RP_UUID']/inventories $.links[?rel = "aggregates"].href: /resource_providers/$ENVIRON['RP_UUID']/aggregates $.links[?rel = "usages"].href: /resource_providers/$ENVIRON['RP_UUID']/usages $.links[?rel = "allocations"].href: /resource_providers/$ENVIRON['RP_UUID']/allocations $.links[?rel = "traits"].href: /resource_providers/$ENVIRON['RP_UUID']/traits ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/resource-provider-policy.yaml0000664000175000017500000000227500000000000032372 0ustar00zuulzuul00000000000000# This tests the individual CRUD operations on /resource_providers # using a non-admin user with an open policy configuration. The # response validation is intentionally minimal. fixtures: - OpenPolicyFixture defaults: request_headers: x-auth-token: user accept: application/json content-type: application/json openstack-api-version: placement latest tests: - name: list resource providers GET: /resource_providers response_json_paths: $.resource_providers: [] - name: create resource provider POST: /resource_providers request_headers: content-type: application/json data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 response_json_paths: $.uuid: $ENVIRON['RP_UUID'] - name: show resource provider GET: /resource_providers/$ENVIRON['RP_UUID'] response_json_paths: $.uuid: $ENVIRON['RP_UUID'] - name: update resource provider PUT: /resource_providers/$ENVIRON['RP_UUID'] data: name: new name status: 200 response_json_paths: $.name: new name $.uuid: $ENVIRON['RP_UUID'] - name: delete resource provider DELETE: /resource_providers/$ENVIRON['RP_UUID'] status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/resource-provider-resources-query.yaml0000664000175000017500000001307000000000000034243 0ustar00zuulzuul00000000000000 fixtures: - AllocationFixture defaults: request_headers: x-auth-token: admin content-type: application/json accept: application/json openstack-api-version: placement latest tests: - name: what is at resource providers GET: /resource_providers response_json_paths: $.resource_providers.`len`: 1 $.resource_providers[0].uuid: $ENVIRON['RP_UUID'] $.resource_providers[0].name: $ENVIRON['RP_NAME'] $.resource_providers[0].links[?rel = "self"].href: /resource_providers/$ENVIRON['RP_UUID'] $.resource_providers[0].links[?rel = "inventories"].href: /resource_providers/$ENVIRON['RP_UUID']/inventories $.resource_providers[0].links[?rel = "aggregates"].href: /resource_providers/$ENVIRON['RP_UUID']/aggregates $.resource_providers[0].links[?rel = "usages"].href: /resource_providers/$ENVIRON['RP_UUID']/usages - name: post new resource provider POST: /resource_providers data: name: $ENVIRON['ALT_RP_NAME'] uuid: $ENVIRON['ALT_RP_UUID'] status: 200 response_headers: location: //resource_providers/[a-f0-9-]+/ - name: now 2 providers listed GET: /resource_providers response_json_paths: $.resource_providers.`len`: 2 - name: list resource providers providing resources filter before API 1.4 GET: /resource_providers?resources=VCPU:1 request_headers: openstack-api-version: placement 1.3 
status: 400 response_strings: - 'Invalid query string parameters' response_json_paths: $.errors[0].title: Bad Request - name: list resource providers providing a badly-formatted resources filter GET: /resource_providers?resources=VCPU status: 400 response_strings: - 'Badly formed resources parameter. Expected resources query string parameter in form:' - 'Got: VCPU.' response_json_paths: $.errors[0].title: Bad Request - name: list resource providers providing a resources filter with non-integer amount GET: /resource_providers?resources=VCPU:fred status: 400 response_strings: - 'Requested resource VCPU expected positive integer amount.' - 'Got: fred.' response_json_paths: $.errors[0].title: Bad Request - name: list resource providers providing a resources filter with negative amount GET: /resource_providers?resources=VCPU:-2 status: 400 response_strings: - 'Requested resource VCPU requires amount >= 1.' - 'Got: -2.' response_json_paths: $.errors[0].title: Bad Request - name: list resource providers providing a resource class not existing GET: /resource_providers?resources=MYMISSINGCLASS:1 status: 400 response_strings: - 'Invalid resource class in resources parameter' response_json_paths: $.errors[0].title: Bad Request - name: list resource providers providing a bad trailing comma GET: /resource_providers?resources=DISK_GB:500, status: 400 response_strings: - 'Badly formed resources parameter. Expected resources query string parameter in form:' # NOTE(mriedem): The value is empty because splitting on the trailing # comma results in an empty string. - 'Got: .' response_json_paths: $.errors[0].title: Bad Request - name: list resource providers providing empty resources GET: /resource_providers?resources= status: 400 response_strings: - Badly formed resources parameter. Expected resources query string parameter in form - 'Got: empty string.' 
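# NOTE: for contrast with the malformed cases above, a well formed resources
# filter is a comma separated list of RESOURCE_CLASS:integer pairs, as the
# successful requests below use, e.g. (illustrative only):
#   GET /resource_providers?resources=DISK_GB:500,VCPU:2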
- name: list resource providers providing disk resources GET: /resource_providers?resources=DISK_GB:500 response_json_paths: $.resource_providers.`len`: 1 $.resource_providers[0].uuid: $ENVIRON['RP_UUID'] - name: list resource providers providing disk and vcpu resources GET: /resource_providers?resources=DISK_GB:500,VCPU:2 response_json_paths: $.resource_providers.`len`: 1 $.resource_providers[0].uuid: $ENVIRON['RP_UUID'] - name: list resource providers providing resources (no match - less than min_unit) GET: /resource_providers?resources=DISK_GB:1 response_json_paths: $.resource_providers.`len`: 0 - name: list resource providers providing resources (no match - more than max_unit) GET: /resource_providers?resources=DISK_GB:1010 response_json_paths: $.resource_providers.`len`: 0 - name: list resource providers providing resources (no match - not enough inventory) GET: /resource_providers?resources=DISK_GB:102400 response_json_paths: $.resource_providers.`len`: 0 - name: list resource providers providing resources (no match - bad step size) GET: /resource_providers?resources=DISK_GB:11 response_json_paths: $.resource_providers.`len`: 0 - name: list resource providers providing resources (no match - no inventory of resource) GET: /resource_providers?resources=MEMORY_MB:10240 response_json_paths: $.resource_providers.`len`: 0 - name: list resource providers providing resources (no match - not enough VCPU) GET: /resource_providers?resources=DISK_GB:500,VCPU:4 response_json_paths: $.resource_providers.`len`: 0 - name: associate an aggregate with rp1 PUT: /resource_providers/$ENVIRON['RP_UUID']/aggregates data: aggregates: - 83a3d69d-8920-48e2-8914-cadfd8fa2f91 resource_provider_generation: $HISTORY['list resource providers providing disk and vcpu resources'].$RESPONSE['$.resource_providers[0].generation'] status: 200 - name: get by aggregates with resources GET: '/resource_providers?member_of=in:83a3d69d-8920-48e2-8914-cadfd8fa2f91&resources=VCPU:2' response_json_paths: $.resource_providers.`len`: 1 $.resource_providers[0].uuid: $ENVIRON['RP_UUID'] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/resource-provider-secure-rbac.yaml0000664000175000017500000001763200000000000033271 0ustar00zuulzuul00000000000000--- fixtures: - SecureRBACPolicyFixture vars: - &project_id $ENVIRON['PROJECT_ID'] - &admin_project_id $ENVIRON['ADMIN_PROJECT_ID'] - &service_project_id $ENVIRON['SERVICE_PROJECT_ID'] - &admin_headers x-auth-token: user x-roles: admin x-project-id: admin_project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &service_headers x-auth-token: user x-roles: service x-project-id: service_project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &system_admin_headers x-auth-token: user x-roles: admin,member,reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &system_reader_headers x-auth-token: user x-roles: reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &project_admin_headers x-auth-token: user x-roles: admin,member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_member_headers x-auth-token: user x-roles: member,reader 
x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_reader_headers x-auth-token: user x-roles: reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest tests: - name: admin can list resource providers GET: /resource_providers request_headers: *admin_headers response_json_paths: $.resource_providers: [] - name: service can list resource providers GET: /resource_providers request_headers: *service_headers response_json_paths: $.resource_providers: [] - name: system admin cannot list resource providers GET: /resource_providers request_headers: *system_admin_headers status: 403 - name: system reader cannot list resource providers GET: /resource_providers request_headers: *system_reader_headers status: 403 - name: project admin can list resource providers GET: /resource_providers request_headers: *project_admin_headers response_json_paths: $.resource_providers: [] - name: project member cannot list resource providers GET: /resource_providers request_headers: *project_member_headers status: 403 - name: project reader cannot list resource providers GET: /resource_providers request_headers: *project_reader_headers status: 403 - name: admin can create resource providers POST: /resource_providers request_headers: *admin_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 response_json_paths: $.uuid: $ENVIRON['RP_UUID'] - name: service can create resource providers POST: /resource_providers request_headers: *service_headers data: name: $ENVIRON['RP_NAME1'] uuid: $ENVIRON['RP_UUID1'] status: 200 response_json_paths: $.uuid: $ENVIRON['RP_UUID1'] - name: system admin cannot create resource providers POST: /resource_providers request_headers: *system_admin_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 403 - name: system reader cannot create resource providers POST: /resource_providers request_headers: *system_reader_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 403 - name: project admin can create resource providers POST: /resource_providers request_headers: *project_admin_headers data: name: $ENVIRON['RP_NAME2'] uuid: $ENVIRON['RP_UUID2'] status: 200 response_json_paths: $.uuid: $ENVIRON['RP_UUID2'] - name: project member cannot create resource providers POST: /resource_providers request_headers: *project_member_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 403 - name: project reader cannot create resource providers POST: /resource_providers request_headers: *project_reader_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 403 - name: admin can show resource provider GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *admin_headers response_json_paths: $.uuid: $ENVIRON['RP_UUID'] - name: service can show resource provider GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *service_headers response_json_paths: $.uuid: $ENVIRON['RP_UUID'] - name: system admin cannot show resource provider GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *system_admin_headers status: 403 - name: system reader cannot show resource provider GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *system_reader_headers status: 403 - name: project admin can show resource provider GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *project_admin_headers response_json_paths: $.uuid: 
$ENVIRON['RP_UUID'] - name: project member cannot show resource provider GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *project_member_headers status: 403 - name: project reader cannot show resource provider GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *project_reader_headers status: 403 - name: admin can update resource provider PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *admin_headers data: name: new name status: 200 response_json_paths: $.name: new name $.uuid: $ENVIRON['RP_UUID'] - name: service can update resource provider PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *service_headers data: name: new name2 status: 200 response_json_paths: $.name: new name2 $.uuid: $ENVIRON['RP_UUID'] - name: system admin cannot update resource provider PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *system_admin_headers data: name: new name status: 403 - name: system reader cannot update resource provider PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *system_reader_headers data: name: new name status: 403 - name: project admin can update resource provider PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *project_admin_headers data: name: new name3 status: 200 response_json_paths: $.name: new name3 $.uuid: $ENVIRON['RP_UUID'] - name: project member cannot update resource provider PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *project_member_headers data: name: new name status: 403 - name: project reader cannot update resource provider PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *project_reader_headers data: name: new name status: 403 - name: system reader cannot delete resource provider DELETE: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *system_reader_headers status: 403 - name: project admin can delete resource provider DELETE: /resource_providers/$ENVIRON['RP_UUID2'] request_headers: *project_admin_headers status: 204 - name: project member cannot delete resource provider DELETE: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *project_member_headers status: 403 - name: project reader cannot delete resource provider DELETE: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *project_reader_headers status: 403 - name: system admin cannot delete resource provider DELETE: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *system_admin_headers status: 403 - name: admin can delete resource provider DELETE: /resource_providers/$ENVIRON['RP_UUID'] request_headers: *admin_headers status: 204 - name: service can delete resource provider DELETE: /resource_providers/$ENVIRON['RP_UUID1'] request_headers: *service_headers status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/resource-provider.yaml0000664000175000017500000006743300000000000031104 0ustar00zuulzuul00000000000000 fixtures: - APIFixture defaults: request_headers: x-auth-token: admin accept: application/json openstack-api-version: placement latest tests: - name: what is at resource providers GET: /resource_providers request_headers: # microversion 1.15 for cache headers openstack-api-version: placement 1.15 response_json_paths: $.resource_providers: [] response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? 
last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ - name: non admin forbidden GET: /resource_providers request_headers: x-auth-token: user accept: application/json status: 403 response_json_paths: $.errors[0].title: Forbidden - name: route not found non json GET: /moo request_headers: accept: text/plain status: 404 response_strings: - The resource could not be found - name: post new resource provider - old microversion POST: /resource_providers request_headers: content-type: application/json openstack-api-version: placement 1.19 data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 201 response_headers: location: //resource_providers/[a-f0-9-]+/ response_forbidden_headers: - content-type - name: delete it DELETE: $LOCATION status: 204 - name: post new resource provider - new microversion POST: /resource_providers request_headers: content-type: application/json data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 response_headers: location: //resource_providers/[a-f0-9-]+/ response_json_paths: $.uuid: $ENVIRON['RP_UUID'] $.name: $ENVIRON['RP_NAME'] $.parent_provider_uuid: null $.generation: 0 $.links[?rel = "self"].href: /resource_providers/$ENVIRON['RP_UUID'] $.links[?rel = "inventories"].href: /resource_providers/$ENVIRON['RP_UUID']/inventories $.links[?rel = "usages"].href: /resource_providers/$ENVIRON['RP_UUID']/usages # On this one, don't test for which field was a duplicate because # that depends on how the database reports columns. - name: try to create same all again POST: /resource_providers request_headers: content-type: application/json data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 409 response_strings: - Conflicting resource provider response_json_paths: $.errors[0].title: Conflict - name: try to create same name again POST: /resource_providers request_headers: content-type: application/json data: name: $ENVIRON['RP_NAME'] uuid: ada30fb5-566d-4fe1-b43b-28a9e988790c status: 409 response_strings: - "Conflicting resource provider name: $ENVIRON['RP_NAME'] already exists" response_json_paths: $.errors[0].title: Conflict $.errors[0].code: placement.duplicate_name - name: confirm the correct post GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: content-type: application/json openstack-api-version: placement 1.15 response_headers: content-type: application/json cache-control: no-cache # Does last-modified look like a legit timestamp? 
last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ response_json_paths: $.uuid: $ENVIRON['RP_UUID'] $.name: $ENVIRON['RP_NAME'] $.parent_provider_uuid: null $.generation: 0 $.links[?rel = "self"].href: /resource_providers/$ENVIRON['RP_UUID'] $.links[?rel = "inventories"].href: /resource_providers/$ENVIRON['RP_UUID']/inventories $.links[?rel = "usages"].href: /resource_providers/$ENVIRON['RP_UUID']/usages - name: get resource provider works with no accept GET: /resource_providers/$ENVIRON['RP_UUID'] response_headers: content-type: /application/json/ response_json_paths: $.uuid: $ENVIRON['RP_UUID'] - name: get non-existing resource provider GET: /resource_providers/d67370b5-4dc0-470d-a4fa-85e8e89abc6c status: 404 response_strings: - No resource provider with uuid d67370b5-4dc0-470d-a4fa-85e8e89abc6c found response_json_paths: $.errors[0].title: Not Found - name: list one resource providers GET: /resource_providers request_headers: openstack-api-version: placement 1.15 response_json_paths: $.resource_providers.`len`: 1 $.resource_providers[0].uuid: $ENVIRON['RP_UUID'] $.resource_providers[0].name: $ENVIRON['RP_NAME'] $.resource_providers[0].generation: 0 $.resource_providers[0].parent_provider_uuid: null $.resource_providers[0].links[?rel = "self"].href: /resource_providers/$ENVIRON['RP_UUID'] $.resource_providers[0].links[?rel = "inventories"].href: /resource_providers/$ENVIRON['RP_UUID']/inventories $.resource_providers[0].links[?rel = "usages"].href: /resource_providers/$ENVIRON['RP_UUID']/usages response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ - name: filter out all resource providers by name GET: /resource_providers?name=flubblebubble response_json_paths: $.resource_providers.`len`: 0 - name: filter out all resource providers by uuid GET: /resource_providers?uuid=d67370b5-4dc0-470d-a4fa-85e8e89abc6c response_json_paths: $.resource_providers.`len`: 0 - name: list one resource provider filtering by name GET: /resource_providers?name=$ENVIRON['RP_NAME'] response_json_paths: $.resource_providers.`len`: 1 $.resource_providers[0].uuid: $ENVIRON['RP_UUID'] $.resource_providers[0].name: $ENVIRON['RP_NAME'] $.resource_providers[0].links[?rel = "self"].href: /resource_providers/$ENVIRON['RP_UUID'] $.resource_providers[0].links[?rel = "inventories"].href: /resource_providers/$ENVIRON['RP_UUID']/inventories $.resource_providers[0].links[?rel = "usages"].href: /resource_providers/$ENVIRON['RP_UUID']/usages - name: list resource providers filtering by invalid uuid GET: /resource_providers?uuid=spameggs status: 400 response_strings: - 'Invalid query string parameters' response_json_paths: $.errors[0].title: Bad Request - name: list resource providers providing an invalid filter GET: /resource_providers?spam=eggs status: 400 response_strings: - 'Invalid query string parameters' response_json_paths: $.errors[0].title: Bad Request - name: list one resource provider filtering by uuid with allocations link GET: /resource_providers?uuid=$ENVIRON['RP_UUID'] request_headers: openstack-api-version: placement 1.11 response_json_paths: $.resource_providers.`len`: 1 $.resource_providers[0].uuid: $ENVIRON['RP_UUID'] $.resource_providers[0].name: $ENVIRON['RP_NAME'] $.resource_providers[0].links.`len`: 6 $.resource_providers[0].links[?rel = "self"].href: /resource_providers/$ENVIRON['RP_UUID'] $.resource_providers[0].links[?rel = "inventories"].href: /resource_providers/$ENVIRON['RP_UUID']/inventories 
$.resource_providers[0].links[?rel = "usages"].href: /resource_providers/$ENVIRON['RP_UUID']/usages $.resource_providers[0].links[?rel = "allocations"].href: /resource_providers/$ENVIRON['RP_UUID']/allocations - name: list one resource provider filtering by uuid no allocations link GET: /resource_providers?uuid=$ENVIRON['RP_UUID'] request_headers: openstack-api-version: placement 1.10 response_json_paths: $.resource_providers.`len`: 1 $.resource_providers[0].uuid: $ENVIRON['RP_UUID'] $.resource_providers[0].name: $ENVIRON['RP_NAME'] $.resource_providers[0].links.`len`: 5 $.resource_providers[0].links[?rel = "self"].href: /resource_providers/$ENVIRON['RP_UUID'] $.resource_providers[0].links[?rel = "inventories"].href: /resource_providers/$ENVIRON['RP_UUID']/inventories $.resource_providers[0].links[?rel = "usages"].href: /resource_providers/$ENVIRON['RP_UUID']/usages - name: update a resource provider's name PUT: /resource_providers/$RESPONSE['$.resource_providers[0].uuid'] request_headers: content-type: application/json openstack-api-version: placement 1.15 data: name: new name status: 200 response_headers: content-type: /application/json/ cache-control: no-cache # Does last-modified look like a legit timestamp? last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ response_forbidden_headers: - location response_json_paths: $.generation: 0 $.name: new name $.uuid: $ENVIRON['RP_UUID'] $.links[?rel = "self"].href: /resource_providers/$ENVIRON['RP_UUID'] - name: check the name from that update GET: $LAST_URL response_json_paths: $.name: new name - name: update a provider poorly PUT: $LAST_URL request_headers: content-type: application/json data: badfield: new name status: 400 response_strings: - 'JSON does not validate' response_json_paths: $.errors[0].title: Bad Request # This section of tests validate nested resource provider relationships and # constraints. We attempt to set the parent provider UUID for the primary # resource provider to a UUID value of a provider we have not yet created and # expect a failure. We then create that parent provider record and attempt to # set the same parent provider UUID without also setting the root provider UUID # to the same value, with an expected failure. Finally, we set the primary # provider's root AND parent to the new provider UUID and verify success. 
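# NOTE: a minimal sketch of the request shape used for (re)parenting in the
# tests below; the target UUID is a placeholder, not a value from this file:
#   PUT /resource_providers/{uuid}
#   {"name": "child", "parent_provider_uuid": "<parent uuid, or null to un-parent from 1.37>"}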
- name: test POST microversion limits nested providers POST: /resource_providers request_headers: openstack-api-version: placement 1.13 content-type: application/json data: name: child parent_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] status: 400 response_strings: - 'JSON does not validate' - name: test PUT microversion limits nested providers PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: openstack-api-version: placement 1.13 content-type: application/json data: name: child parent_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] status: 400 response_strings: - 'JSON does not validate' - name: fail trying to set a root provider UUID PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: content-type: application/json data: root_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] status: 400 response_strings: - 'JSON does not validate' - name: fail trying to self-parent POST: /resource_providers request_headers: content-type: application/json data: name: child uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] parent_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] status: 400 response_strings: - 'parent provider UUID cannot be same as UUID' - 'Unable to create resource provider \"child\", $ENVIRON["ALT_PARENT_PROVIDER_UUID"]:' - name: update a parent provider UUID to non-existing provider PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: content-type: application/json data: name: parent parent_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] status: 400 response_strings: - 'parent provider UUID does not exist' - name: now create the parent provider POST: /resource_providers request_headers: content-type: application/json data: name: parent uuid: $ENVIRON['PARENT_PROVIDER_UUID'] status: 200 response_json_paths: $.uuid: $ENVIRON['PARENT_PROVIDER_UUID'] $.name: parent $.parent_provider_uuid: null $.generation: 0 - name: get provider with old microversion no root provider UUID field GET: /resource_providers/$ENVIRON['PARENT_PROVIDER_UUID'] request_headers: openstack-api-version: placement 1.13 content-type: application/json response_json_paths: $.`len`: 4 name: parent status: 200 - name: get provider has root provider UUID field GET: /resource_providers/$ENVIRON['PARENT_PROVIDER_UUID'] request_headers: content-type: application/json response_json_paths: $.`len`: 6 name: parent root_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] parent_provider_uuid: null status: 200 - name: update a parent PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: content-type: application/json data: name: child parent_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] status: 200 - name: get provider has new parent and root provider UUID field GET: /resource_providers/$ENVIRON['RP_UUID'] request_headers: content-type: application/json response_json_paths: name: child root_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] parent_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] status: 200 - name: fail trying to un-parent with old microversion PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: content-type: application/json openstack-api-version: placement 1.36 data: name: child parent_provider_uuid: null status: 400 response_strings: - 'un-parenting a provider is not currently allowed' - name: un-parent provider PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: content-type: application/json openstack-api-version: placement 1.37 data: name: child parent_provider_uuid: null status: 200 response_json_paths: $.uuid: $ENVIRON['RP_UUID'] $.name: 'child' 
$.parent_provider_uuid: null $.root_provider_uuid: $ENVIRON['RP_UUID'] - name: re-parent back to its original parent after un-parent PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: content-type: application/json openstack-api-version: placement 1.37 data: name: child parent_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] status: 200 response_json_paths: $.uuid: $ENVIRON['RP_UUID'] $.name: child $.parent_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] $.root_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] - name: 409 conflict while trying to delete parent with existing child DELETE: /resource_providers/$ENVIRON['PARENT_PROVIDER_UUID'] status: 409 response_strings: - "Unable to delete parent resource provider $ENVIRON['PARENT_PROVIDER_UUID']: It has child resource providers." response_json_paths: $.errors[0].code: placement.resource_provider.cannot_delete_parent - name: list all resource providers in a tree that does not exist GET: /resource_providers?in_tree=$ENVIRON['ALT_PARENT_PROVIDER_UUID'] response_json_paths: $.resource_providers.`len`: 0 - name: list all resource providers in a tree with multiple providers in tree GET: /resource_providers?in_tree=$ENVIRON['RP_UUID'] response_json_paths: $.resource_providers.`len`: 2 # Verify that we have both the parent and child in the list $.resource_providers[?uuid="$ENVIRON['PARENT_PROVIDER_UUID']"].root_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] $.resource_providers[?uuid="$ENVIRON['RP_UUID']"].root_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] - name: create a new parent provider POST: /resource_providers request_headers: content-type: application/json data: name: altwparent uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] status: 200 response_headers: location: //resource_providers/[a-f0-9-]+/ response_json_paths: $.uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] $.name: altwparent - name: list all resource providers in a tree GET: /resource_providers?in_tree=$ENVIRON['ALT_PARENT_PROVIDER_UUID'] response_json_paths: $.resource_providers.`len`: 1 $.resource_providers[?uuid="$ENVIRON['ALT_PARENT_PROVIDER_UUID']"].root_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] - name: filter providers by traits none of them have GET: /resource_providers?required=HW_CPU_X86_SGX,HW_CPU_X86_SHA response_json_paths: $.resource_providers.`len`: 0 - name: add traits to a provider PUT: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: content-type: application/json data: resource_provider_generation: 0 traits: ['HW_CPU_X86_SGX', 'STORAGE_DISK_SSD'] - name: add traits to another provider PUT: /resource_providers/$ENVIRON['ALT_PARENT_PROVIDER_UUID']/traits request_headers: content-type: application/json data: resource_provider_generation: 0 traits: ['MISC_SHARES_VIA_AGGREGATE', 'STORAGE_DISK_SSD'] - name: filter providers with multiple traits where no provider has all of them GET: /resource_providers?required=HW_CPU_X86_SGX,MISC_SHARES_VIA_AGGREGATE response_json_paths: $.resource_providers.`len`: 0 - name: filter providers with a trait some of them have GET: /resource_providers?required=STORAGE_DISK_SSD response_json_paths: $.resource_providers.`len`: 2 # Don't really care about the root UUID - just validating that the providers present are the ones we expected $.resource_providers[?uuid="$ENVIRON['ALT_PARENT_PROVIDER_UUID']"].root_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] $.resource_providers[?uuid="$ENVIRON['RP_UUID']"].root_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] - name: list providers with 'required' parameter 
filters cumulatively with in_tree GET: /resource_providers?required=STORAGE_DISK_SSD&in_tree=$ENVIRON['RP_UUID'] response_json_paths: $.resource_providers.`len`: 1 # Only RP_UUID satisfies both the tree and trait constraint $.resource_providers[?uuid="$ENVIRON['RP_UUID']"].root_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] - name: list providers for full count GET: /resource_providers response_json_paths: $.resource_providers.`len`: 3 - name: list providers forbidden 1.22 GET: /resource_providers?required=!STORAGE_DISK_SSD response_json_paths: $.resource_providers.`len`: 1 - name: confirm forbidden trait not there GET: /resource_providers/$RESPONSE['$.resource_providers[0].uuid']/traits response_json_paths: $.traits: [] - name: list providers forbidden 1.21 GET: /resource_providers?required=!STORAGE_DISK_SSD request_headers: openstack-api-version: placement 1.21 status: 400 response_strings: - "Invalid query string parameters: Expected 'required' parameter value of the form: HW_CPU_X86_VMX,CUSTOM_MAGIC. Got: !STORAGE_DISK_SSD" - name: list providers forbidden again GET: /resource_providers?required=!MISC_SHARES_VIA_AGGREGATE response_json_paths: $.resource_providers.`len`: 2 - name: mixed required and forbidden GET: /resource_providers?required=!HW_CPU_X86_SGX,STORAGE_DISK_SSD response_json_paths: $.resource_providers.`len`: 1 - name: confirm mixed required and forbidden GET: /resource_providers/$RESPONSE['$.resource_providers[0].uuid']/traits response_json_paths: $.traits.`sorted`: ['MISC_SHARES_VIA_AGGREGATE', 'STORAGE_DISK_SSD'] - name: multiple forbidden GET: /resource_providers?required=!MISC_SHARES_VIA_AGGREGATE,!HW_CPU_X86_SGX response_json_paths: $.resource_providers.`len`: 1 - name: confirm multiple forbidden GET: /resource_providers/$RESPONSE['$.resource_providers[0].uuid']/traits response_json_paths: $.traits: [] - name: forbidden no apply GET: /resource_providers?required=!HW_CPU_X86_VMX response_json_paths: $.resource_providers.`len`: 3 - name: create some inventory PUT: /resource_providers/$ENVIRON['ALT_PARENT_PROVIDER_UUID']/inventories request_headers: content-type: application/json data: resource_provider_generation: 1 inventories: IPV4_ADDRESS: total: 253 DISK_GB: total: 1024 status: 200 response_json_paths: $.resource_provider_generation: 2 $.inventories.IPV4_ADDRESS.total: 253 $.inventories.IPV4_ADDRESS.reserved: 0 $.inventories.DISK_GB.total: 1024 $.inventories.DISK_GB.allocation_ratio: 1.0 - name: list providers with 'required' parameter filters cumulatively with resources GET: /resource_providers?required=STORAGE_DISK_SSD&resources=IPV4_ADDRESS:10 response_json_paths: $.resource_providers.`len`: 1 # Only ALT_PARENT_PROVIDER_UUID satisfies both the tree and trait constraint $.resource_providers[?uuid="$ENVIRON['ALT_PARENT_PROVIDER_UUID']"].root_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] - name: invalid 'required' parameter - blank GET: /resource_providers?required= status: 400 response_strings: - "Invalid query string parameters: Expected 'required' parameter value of the form: HW_CPU_X86_VMX,!CUSTOM_MAGIC." response_json_paths: $.errors[0].title: Bad Request - name: invalid 'required' parameter 1.21 GET: /resource_providers?required= request_headers: openstack-api-version: placement 1.21 status: 400 response_strings: - "Invalid query string parameters: Expected 'required' parameter value of the form: HW_CPU_X86_VMX,CUSTOM_MAGIC." 
response_json_paths: $.errors[0].title: Bad Request - name: invalid 'required' parameter - contains an empty trait name GET: /resource_providers?required=STORAGE_DISK_SSD,,MISC_SHARES_VIA_AGGREGATE status: 400 response_strings: - "Invalid query string parameters: Expected 'required' parameter value of the form: HW_CPU_X86_VMX,!CUSTOM_MAGIC." response_json_paths: $.errors[0].title: Bad Request - name: invalid 'required' parameter - contains a nonexistent trait GET: /resource_providers?required=STORAGE_DISK_SSD,BOGUS_TRAIT,MISC_SHARES_VIA_AGGREGATE status: 400 response_strings: - "No such trait(s): BOGUS_TRAIT." response_json_paths: $.errors[0].title: Bad Request - name: schema validation fails with 'required' parameter on old microversion request_headers: openstack-api-version: placement 1.17 GET: /resource_providers?required=HW_CPU_X86_SGX,MISC_SHARES_VIA_AGGREGATE status: 400 response_strings: - Additional properties are not allowed response_json_paths: $.errors[0].title: Bad Request - name: fail trying to re-parent to a different provider with old microversion PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: content-type: application/json openstack-api-version: placement 1.36 data: name: child parent_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] status: 400 response_strings: - 're-parenting a provider is not currently allowed' - name: re-parent to a different provider PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: content-type: application/json openstack-api-version: placement 1.37 data: name: child parent_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] status: 200 response_json_paths: $.uuid: $ENVIRON['RP_UUID'] $.name: 'child' $.parent_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] $.root_provider_uuid: $ENVIRON['ALT_PARENT_PROVIDER_UUID'] - name: re-parent back to its original parent PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: content-type: application/json openstack-api-version: placement 1.37 data: name: child parent_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] status: 200 response_json_paths: $.uuid: $ENVIRON['RP_UUID'] $.name: child $.parent_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] $.root_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] - name: create a new provider POST: /resource_providers request_headers: content-type: application/json data: name: cow status: 200 - name: try to rename that provider to existing name PUT: $LOCATION request_headers: content-type: application/json data: name: child status: 409 response_json_paths: $.errors[0].title: Conflict $.errors[0].code: placement.duplicate_name - name: fail to put that provider with uuid PUT: $LAST_URL request_headers: content-type: application/json data: name: second new name uuid: 7d4275fc-8b40-4995-85e2-74fcec2cb3b6 status: 400 response_strings: - Additional properties are not allowed response_json_paths: $.errors[0].title: Bad Request - name: delete resource provider DELETE: $LAST_URL status: 204 - name: 404 on deleted provider DELETE: $LAST_URL status: 404 response_json_paths: $.errors[0].title: Not Found - name: fail to get a provider GET: /resource_providers/random_sauce status: 404 response_json_paths: $.errors[0].title: Not Found - name: delete non-existing resource provider DELETE: /resource_providers/d67370b5-4dc0-470d-a4fa-85e8e89abc6c status: 404 response_strings: - No resource provider with uuid d67370b5-4dc0-470d-a4fa-85e8e89abc6c found for delete response_json_paths: $.errors[0].title: Not Found - name: post resource provider no uuid POST: 
/resource_providers request_headers: content-type: application/json data: name: a name status: 200 response_headers: location: //resource_providers/[a-f0-9-]+/ - name: post malformed json as json POST: /resource_providers request_headers: content-type: application/json data: '{"foo": }' status: 400 response_strings: - 'Malformed JSON:' response_json_paths: $.errors[0].title: Bad Request - name: post bad uuid in resource provider POST: /resource_providers request_headers: content-type: application/json data: name: my bad rp uuid: this is not a uuid status: 400 response_strings: - "Failed validating 'format'" response_json_paths: $.errors[0].title: Bad Request - name: try to create resource provider with name exceed max characters POST: /resource_providers request_headers: content-type: application/json data: name: &name_exceeds_max_length_check This is a long text of 201 charactersssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss status: 400 response_strings: - "Failed validating 'maxLength'" response_json_paths: $.errors[0].title: Bad Request - name: try to update resource provider with name exceed max characters PUT: /resource_providers/$ENVIRON['RP_UUID'] request_headers: content-type: application/json data: name: *name_exceeds_max_length_check status: 400 response_strings: - "Failed validating 'maxLength'" response_json_paths: $.errors[0].title: Bad Request - name: confirm no cache-control headers before 1.15 GET: /resource_providers request_headers: openstack-api-version: placement 1.14 response_forbidden_headers: - cache-control - last-modified - name: fail updating a parent to itself PUT: /resource_providers/$ENVIRON['PARENT_PROVIDER_UUID'] request_headers: content-type: application/json data: name: parent parent_provider_uuid: $ENVIRON['PARENT_PROVIDER_UUID'] status: 400 response_strings: - 'creating loop in the provider tree is not allowed.' - name: fail updating the parent to point to its child PUT: /resource_providers/$ENVIRON['PARENT_PROVIDER_UUID'] request_headers: content-type: application/json data: name: parent parent_provider_uuid: $ENVIRON['RP_UUID'] status: 400 response_strings: - 'creating loop in the provider tree is not allowed.' - name: create a resource provider with dashed uuid POST: /resource_providers request_headers: content-type: application/json data: name: rp with dashed uuid uuid: 2290d4af-9e6e-400b-9d65-1ee01376f71a status: 200 response_headers: location: //resource_providers/[a-f0-9-]+/ - name: try to create with the same uuid but without dashes POST: /resource_providers request_headers: content-type: application/json data: name: rp with dashless uuid uuid: 2290d4af9e6e400b9d651ee01376f71a status: 409 response_strings: - "Conflicting resource provider uuid: 2290d4af-9e6e-400b-9d65-1ee01376f71a already exists" response_json_paths: $.errors[0].title: Conflict ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/same-subtree-deep.yaml0000664000175000017500000000553700000000000030731 0ustar00zuulzuul00000000000000# Test same_subtree with a deep hierarchy where the top levels of the tree # provide no resources. We create this by adding additional empty top # providers to the NUMANetworkFixture used elsewhere for testing same_subtree. 
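# NOTE: a minimal sketch of the query shape exercised below; the _COMPUTE and
# _PORT1 suffixes are arbitrary request group names chosen by the caller:
#   GET /allocation_candidates?resources_COMPUTE=VCPU:1
#       &resources_PORT1=CUSTOM_VF:2&same_subtree=_COMPUTE,_PORT1&group_policy=none
# same_subtree constrains the providers satisfying the named groups to sit in
# the same subtree of a single provider tree.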
fixtures: - DeepNUMANetworkFixture defaults: request_headers: x-auth-token: admin content-type: application/json accept: application/json # version of request in which `same_subtree` is supported openstack-api-version: placement 1.36 tests: - name: deep subtree 2VFs, one compute GET: /allocation_candidates query_parameters: resources_COMPUTE: VCPU:1 required_COMPUTE: CUSTOM_FOO required_NIC: CUSTOM_HW_NIC_ROOT resources_PORT1: CUSTOM_VF:2 required_PORT1: CUSTOM_PHYSNET1 # Make sure that there is a chain of subtrees, compute->nic->port, so # that we only get results where _PORT1 is anchored under _NIC, which # is anchored under _COMPUTE. # _COMPUTE, _NIC, _PORT1 in one same_subtree would allow some _PORT1 # results to be independent of _NIC (while still sharing the _COMPUTE # ancestor), leading to 12 allocation requests instead of 4. same_subtree: - _NIC,_COMPUTE - _NIC,_PORT1 group_policy: none # Create an anchor of this response verification, used below to signify that # each of three tests expects the same responses. response_json_paths: &json_response $.provider_summaries.`len`: 26 $.allocation_requests.`len`: 4 $.allocation_requests..mappings._COMPUTE: # 4 cn2_uuid each as a list, no other computes - - $ENVIRON['CN2_UUID'] - - $ENVIRON['CN2_UUID'] - - $ENVIRON['CN2_UUID'] - - $ENVIRON['CN2_UUID'] $.allocation_requests..allocations['$ENVIRON["CN2_UUID"]'].resources.VCPU: [1, 1, 1, 1] $.allocation_requests..allocations['$ENVIRON["PF1_1_UUID"]'].resources.CUSTOM_VF: 2 $.allocation_requests..allocations['$ENVIRON["PF3_1_UUID"]'].resources.CUSTOM_VF: 2 $.allocation_requests..allocations['$ENVIRON["PF2_1_UUID"]'].resources.CUSTOM_VF: 2 $.allocation_requests..allocations['$ENVIRON["PF2_3_UUID"]'].resources.CUSTOM_VF: 2 - name: deep subtree 2VFs, with foo GET: /allocation_candidates query_parameters: resources_COMPUTE: VCPU:1 required_COMPUTE: CUSTOM_FOO resources_PORT1: CUSTOM_VF:2 required_PORT1: CUSTOM_PHYSNET1 same_subtree: _COMPUTE,_PORT1 group_policy: none response_json_paths: <<: *json_response - name: deep subtree 2VFs, no foo GET: /allocation_candidates query_parameters: resources_COMPUTE: VCPU:1 resources_PORT1: CUSTOM_VF:2 required_PORT1: CUSTOM_PHYSNET1 same_subtree: _COMPUTE,_PORT1 group_policy: none response_json_paths: <<: *json_response ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/shared-resources.yaml0000664000175000017500000001015200000000000030665 0ustar00zuulzuul00000000000000# Create a shared resource provider that shares a custom resource # class with a compute node and confirm that it is returned when # requesting resources. # # NOTE(cdent): raw uuids are used here instead of environment variables as # there's no need to query on them or change them, but something has to be # there. 
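# Editorial summary (hedged; the tests below are the authoritative sequence):
# a provider only shares its inventory with another provider once all three
# of the following are in place, and the tests establish them step by step:
#
#   1. inventory of the shared resource class on the sharing provider, e.g.
#        PUT /resource_providers/{uuid}/inventories
#        data: {inventories: {CUSTOM_MAGIC: {total: 5}}, ...}
#   2. membership of both the sharing and the consuming provider in a common
#      aggregate, e.g.
#        PUT /resource_providers/{uuid}/aggregates
#        data: {aggregates: [<aggregate uuid>], resource_provider_generation: N}
#   3. the MISC_SHARES_VIA_AGGREGATE trait on the sharing provider.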
fixtures: - APIFixture defaults: request_headers: x-auth-token: admin content-type: application/json accept: application/json openstack-api-version: placement latest tests: - name: create compute node 1 POST: /resource_providers data: name: cn1 uuid: 8d830468-6395-46b0-b56a-f934a1d60bbe status: 200 - name: cn1 inventory PUT: /resource_providers/8d830468-6395-46b0-b56a-f934a1d60bbe/inventories data: resource_provider_generation: 0 inventories: VCPU: total: 20 MEMORY_MB: total: 100000 status: 200 - name: create compute node 2 POST: /resource_providers data: name: cn2 uuid: ed6ea55d-01ce-4e11-ba97-13a4e5540b3e status: 200 - name: cn2 inventory PUT: /resource_providers/ed6ea55d-01ce-4e11-ba97-13a4e5540b3e/inventories data: resource_provider_generation: 0 inventories: VCPU: total: 20 MEMORY_MB: total: 100000 DISK_GB: total: 100000 status: 200 - name: create custom magic PUT: /resource_classes/CUSTOM_MAGIC status: 201 - name: create shared 1 POST: /resource_providers data: uuid: d450bd39-3b01-4355-9ea1-594f96594cf1 name: custom magic share status: 200 - name: shared 1 inventory PUT: /resource_providers/d450bd39-3b01-4355-9ea1-594f96594cf1/inventories data: resource_provider_generation: 0 inventories: CUSTOM_MAGIC: total: 5 status: 200 # no aggregate association - name: get resources no agg GET: /resource_providers?resources=VCPU:1,CUSTOM_MAGIC:1 response_json_paths: $.resource_providers.`len`: 0 - name: get allocation candidates no agg desc: this sometimes fails GET: /allocation_candidates?resources=VCPU:1,CUSTOM_MAGIC:1 response_json_paths: $.allocation_requests.`len`: 0 $.provider_summaries.`len`: 0 - name: aggregate shared PUT: /resource_providers/d450bd39-3b01-4355-9ea1-594f96594cf1/aggregates data: aggregates: - f3dc0f36-97d4-4daf-be0c-d71466da9c85 resource_provider_generation: 1 - name: aggregate cn1 PUT: /resource_providers/8d830468-6395-46b0-b56a-f934a1d60bbe/aggregates data: aggregates: - f3dc0f36-97d4-4daf-be0c-d71466da9c85 resource_provider_generation: 1 # no shared trait - name: get resources no shared GET: /resource_providers?resources=VCPU:1,CUSTOM_MAGIC:1 response_json_paths: $.resource_providers.`len`: 0 - name: get allocation candidates no shared GET: /allocation_candidates?resources=VCPU:1,CUSTOM_MAGIC:1 response_json_paths: $.allocation_requests.`len`: 0 $.provider_summaries.`len`: 0 - name: set trait shared PUT: /resource_providers/d450bd39-3b01-4355-9ea1-594f96594cf1/traits data: resource_provider_generation: 2 traits: - MISC_SHARES_VIA_AGGREGATE # this should be zero because we only expect those resource providers which # can fully satisfy the resources query themselves when making requests of # /resource_providers. This may change in the future depending on use # cases. This test and the next demonstrate and confirm that # /resource_providers and /allocation_candidates have different behaviors. - name: get resources shared GET: /resource_providers?resources=VCPU:1,CUSTOM_MAGIC:1 response_json_paths: $.resource_providers.`len`: 0 # this is one allocation request and two resource providers because # at /allocation_candidates we expect those resource providers which # can either fully satisfy the resources query or can do so with the # assistance of a sharing provider.
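# For orientation, a hedged sketch of the expected response shape (the test
# below only asserts lengths): the single candidate pairs cn1 with the
# sharing provider, roughly
#
#   allocation_requests:
#     - allocations:
#         8d830468-6395-46b0-b56a-f934a1d60bbe: {resources: {VCPU: 1}}
#         d450bd39-3b01-4355-9ea1-594f96594cf1: {resources: {CUSTOM_MAGIC: 1}}
#   provider_summaries: {<cn1>: {...}, <sharing provider>: {...}}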
- name: get allocation candidates shared GET: /allocation_candidates?resources=VCPU:1,CUSTOM_MAGIC:1 response_json_paths: $.allocation_requests.`len`: 1 $.provider_summaries.`len`: 2 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/traits-legacy-rbac.yaml0000664000175000017500000000555700000000000031075 0ustar00zuulzuul00000000000000--- fixtures: - LegacyRBACPolicyFixture vars: - &project_id $ENVIRON['PROJECT_ID'] - &project_admin_headers x-auth-token: user x-roles: admin,member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_member_headers x-auth-token: user x-roles: member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest tests: - name: project member cannot list traits GET: /traits request_headers: *project_member_headers status: 403 - name: project admin can list traits GET: /traits request_headers: *project_admin_headers status: 200 - name: project member cannot create trait PUT: /traits/CUSTOM_TRAIT_X request_headers: *project_member_headers status: 403 - name: project admin can create trait PUT: /traits/CUSTOM_TRAIT_X request_headers: *project_admin_headers status: 201 - name: project member cannot show trait GET: /traits/CUSTOM_TRAIT_X request_headers: *project_member_headers status: 403 - name: project admin can show trait GET: /traits/CUSTOM_TRAIT_X request_headers: *project_admin_headers status: 204 - name: project admin can create resource provider POST: /resource_providers request_headers: *project_admin_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 - name: project member cannot list resource provider traits GET: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *project_member_headers status: 403 - name: project admin can list resource provider traits GET: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *project_admin_headers status: 200 - name: project member cannot update resource provider traits PUT: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *project_member_headers status: 403 data: traits: - CUSTOM_TRAIT_X resource_provider_generation: 0 - name: project admin can update resource provider traits PUT: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *project_admin_headers status: 200 data: traits: - CUSTOM_TRAIT_X resource_provider_generation: 0 - name: project member cannot delete resource provider traits DELETE: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *project_member_headers status: 403 - name: project admin can delete resource provider traits DELETE: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *project_admin_headers status: 204 - name: project member cannot delete trait DELETE: /traits/CUSTOM_TRAIT_X request_headers: *project_member_headers status: 403 - name: project admin can delete trait DELETE: /traits/CUSTOM_TRAIT_X request_headers: *project_admin_headers status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/traits-policy.yaml0000664000175000017500000000241100000000000030211 0ustar00zuulzuul00000000000000# This tests the individual CRUD operations on # /traits* and /resource_providers/{uuid}/traits using a
non-admin user with an # open policy configuration. The response validation is intentionally minimal. fixtures: - OpenPolicyFixture defaults: request_headers: x-auth-token: user accept: application/json content-type: application/json openstack-api-version: placement latest tests: - name: list traits GET: /traits status: 200 - name: create a trait PUT: /traits/CUSTOM_TRAIT_X status: 201 - name: show trait GET: /traits/CUSTOM_TRAIT_X status: 204 - name: create resource provider POST: /resource_providers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 - name: list resource provider traits GET: /resource_providers/$ENVIRON['RP_UUID']/traits status: 200 - name: update resource provider traits PUT: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: content-type: application/json status: 200 data: traits: - CUSTOM_TRAIT_X resource_provider_generation: 0 - name: delete resource provider traits DELETE: /resource_providers/$ENVIRON['RP_UUID']/traits status: 204 - name: delete trait DELETE: /traits/CUSTOM_TRAIT_X status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/traits-secure-rbac.yaml0000664000175000017500000002300000000000000031102 0ustar00zuulzuul00000000000000--- fixtures: - SecureRBACPolicyFixture vars: - &project_id $ENVIRON['PROJECT_ID'] - &admin_project_id $ENVIRON['ADMIN_PROJECT_ID'] - &service_project_id $ENVIRON['SERVICE_PROJECT_ID'] - &admin_headers x-auth-token: user x-roles: admin x-project-id: admin_project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &service_headers x-auth-token: user x-roles: service x-project-id: service_project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &system_admin_headers x-auth-token: user x-roles: admin,member,reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &system_reader_headers x-auth-token: user x-roles: reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &project_admin_headers x-auth-token: user x-roles: admin,member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_member_headers x-auth-token: user x-roles: member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_reader_headers x-auth-token: user x-roles: reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest tests: - name: admin can list traits GET: /traits request_headers: *admin_headers status: 200 - name: service can list traits GET: /traits request_headers: *service_headers status: 200 - name: project admin can list traits GET: /traits request_headers: *project_admin_headers status: 200 - name: project member cannot list traits GET: /traits request_headers: *project_member_headers status: 403 - name: project reader cannot list traits GET: /traits request_headers: *project_reader_headers status: 403 - name: system reader cannot list traits GET: /traits request_headers: *system_reader_headers status: 403 - name: system admin cannot list traits GET: /traits request_headers: *system_admin_headers status: 403 - name: 
admin can create trait PUT: /traits/CUSTOM_TRAIT_X request_headers: *admin_headers status: 201 - name: service can create trait PUT: /traits/CUSTOM_TRAIT_X1 request_headers: *service_headers status: 201 - name: project admin can create trait PUT: /traits/CUSTOM_TRAIT_X2 request_headers: *project_admin_headers status: 201 - name: project member cannot create trait PUT: /traits/CUSTOM_TRAIT_X request_headers: *project_member_headers status: 403 - name: project reader cannot create trait PUT: /traits/CUSTOM_TRAIT_X request_headers: *project_reader_headers status: 403 - name: system reader cannot create trait PUT: /traits/CUSTOM_TRAIT_X request_headers: *system_reader_headers status: 403 - name: system admin cannot create trait PUT: /traits/CUSTOM_TRAIT_X request_headers: *system_admin_headers status: 403 - name: admin can show trait GET: /traits/CUSTOM_TRAIT_X request_headers: *admin_headers status: 204 - name: service can show trait GET: /traits/CUSTOM_TRAIT_X request_headers: *service_headers status: 204 - name: project admin can show trait GET: /traits/CUSTOM_TRAIT_X request_headers: *project_admin_headers status: 204 - name: project member cannot show trait GET: /traits/CUSTOM_TRAIT_X request_headers: *project_member_headers status: 403 - name: project reader cannot show trait GET: /traits/CUSTOM_TRAIT_X request_headers: *project_reader_headers status: 403 - name: system reader cannot show trait GET: /traits/CUSTOM_TRAIT_X request_headers: *system_reader_headers status: 403 - name: system admin cannot show trait GET: /traits/CUSTOM_TRAIT_X request_headers: *system_admin_headers status: 403 - name: admin can create resource provider POST: /resource_providers request_headers: *admin_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 - name: service can create resource providers POST: /resource_providers request_headers: *service_headers data: name: $ENVIRON['RP_NAME1'] uuid: $ENVIRON['RP_UUID1'] status: 200 - name: project admin can create resource providers POST: /resource_providers request_headers: *project_admin_headers data: name: $ENVIRON['RP_NAME2'] uuid: $ENVIRON['RP_UUID2'] status: 200 - name: admin can list resource provider traits GET: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *admin_headers status: 200 - name: service can list resource provider traits GET: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *service_headers status: 200 - name: project admin can list resource provider traits GET: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *project_admin_headers status: 200 - name: project member cannot list resource provider traits GET: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *project_member_headers status: 403 - name: project reader cannot list resource provider traits GET: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *project_reader_headers status: 403 - name: system reader cannot list resource provider traits GET: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *system_reader_headers status: 403 - name: system admin cannot list resource provider traits GET: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *system_admin_headers status: 403 - name: project admin can update resource provider traits PUT: /resource_providers/$ENVIRON['RP_UUID2']/traits request_headers: *project_admin_headers status: 200 data: traits: - CUSTOM_TRAIT_X2 resource_provider_generation: 0 - name: project member cannot update resource provider traits 
PUT: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *project_member_headers status: 403 data: traits: - CUSTOM_TRAIT_X resource_provider_generation: 0 - name: project reader cannot update resource provider traits PUT: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *project_reader_headers status: 403 data: traits: - CUSTOM_TRAIT_X resource_provider_generation: 0 - name: system reader cannot update resource provider traits PUT: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *system_reader_headers status: 403 data: traits: - CUSTOM_TRAIT_X resource_provider_generation: 0 - name: system admin cannot update resource provider traits PUT: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *system_admin_headers status: 403 data: traits: - CUSTOM_TRAIT_X resource_provider_generation: 0 - name: admin can update resource provider traits PUT: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *admin_headers status: 200 data: traits: - CUSTOM_TRAIT_X resource_provider_generation: 0 - name: service can update resource provider traits PUT: /resource_providers/$ENVIRON['RP_UUID1']/traits request_headers: *service_headers status: 200 data: traits: - CUSTOM_TRAIT_X1 resource_provider_generation: 0 - name: project admin can delete resource provider traits DELETE: /resource_providers/$ENVIRON['RP_UUID2']/traits request_headers: *project_admin_headers status: 204 - name: project member cannot delete resource provider traits DELETE: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *project_member_headers status: 403 - name: project reader cannot delete resource provider traits DELETE: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *project_reader_headers status: 403 - name: system reader cannot delete resource provider traits DELETE: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *system_reader_headers status: 403 - name: system admin cannot delete resource provider traits DELETE: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *system_admin_headers status: 403 - name: admin can delete resource provider traits DELETE: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: *admin_headers status: 204 - name: service can delete resource provider traits DELETE: /resource_providers/$ENVIRON['RP_UUID1']/traits request_headers: *service_headers status: 204 - name: project admin can delete trait DELETE: /traits/CUSTOM_TRAIT_X2 request_headers: *project_admin_headers status: 204 - name: project member cannot delete trait DELETE: /traits/CUSTOM_TRAIT_X request_headers: *project_member_headers status: 403 - name: project reader cannot delete trait DELETE: /traits/CUSTOM_TRAIT_X request_headers: *project_reader_headers status: 403 - name: system reader cannot delete trait DELETE: /traits/CUSTOM_TRAIT_X request_headers: *system_reader_headers status: 403 - name: system admin cannot delete trait DELETE: /traits/CUSTOM_TRAIT_X request_headers: *system_admin_headers status: 403 - name: admin can delete trait DELETE: /traits/CUSTOM_TRAIT_X request_headers: *admin_headers status: 204 - name: service can delete trait DELETE: /traits/CUSTOM_TRAIT_X1 request_headers: *service_headers status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/traits.yaml0000664000175000017500000003321100000000000026716 0ustar00zuulzuul00000000000000 fixtures: - APIFixture 
defaults: request_headers: x-auth-token: admin # traits introduced in 1.6 openstack-api-version: placement 1.6 tests: - name: create a trait without custom namespace PUT: /traits/TRAIT_X status: 400 response_strings: - 'The trait is invalid. A valid trait must be no longer than 255 characters, start with the prefix \"CUSTOM_\" and use following characters: \"A\"-\"Z\", \"0\"-\"9\" and \"_\"' - name: create a trait with invalid characters PUT: /traits/CUSTOM_ABC:1 status: 400 response_strings: - 'The trait is invalid. A valid trait must be no longer than 255 characters, start with the prefix \"CUSTOM_\" and use following characters: \"A\"-\"Z\", \"0\"-\"9\" and \"_\"' - name: create a trait with name exceed max characters PUT: /traits/CUSTOM_ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNO status: 400 response_strings: - 'The trait is invalid. A valid trait must be no longer than 255 characters, start with the prefix \"CUSTOM_\" and use following characters: \"A\"-\"Z\", \"0\"-\"9\" and \"_\"' - name: create a trait earlier version PUT: /traits/CUSTOM_TRAIT_1 request_headers: openstack-api-version: placement 1.5 status: 404 - name: create a trait PUT: /traits/CUSTOM_TRAIT_1 status: 201 response_headers: location: //traits/CUSTOM_TRAIT_1/ response_forbidden_headers: - content-type # PUT in 1.6 version should not have cache headers - cache-control - last-modified - name: create a trait which existed PUT: /traits/CUSTOM_TRAIT_1 status: 204 response_headers: location: //traits/CUSTOM_TRAIT_1/ response_forbidden_headers: - content-type - name: get a trait earlier version GET: /traits/CUSTOM_TRAIT_1 request_headers: openstack-api-version: placement 1.5 status: 404 - name: get a trait GET: /traits/CUSTOM_TRAIT_1 status: 204 response_forbidden_headers: - content-type # In early versions cache headers should not be present - cache-control - last-modified - name: get a non-existed trait GET: /traits/NON_EXISTED status: 404 - name: delete a trait earlier version DELETE: /traits/CUSTOM_TRAIT_1 request_headers: openstack-api-version: placement 1.5 status: 404 - name: delete a trait DELETE: /traits/CUSTOM_TRAIT_1 status: 204 response_forbidden_headers: - content-type # DELETE in any version should not have cache headers - cache-control - last-modified - name: delete a non-existed trait DELETE: /traits/CUSTOM_NON_EXSITED status: 404 - name: try to delete standard trait DELETE: /traits/HW_CPU_X86_SSE status: 400 response_strings: - Cannot delete standard trait - name: create CUSTOM_TRAIT_1 PUT: /traits/CUSTOM_TRAIT_1 status: 201 response_headers: location: //traits/CUSTOM_TRAIT_1/ response_forbidden_headers: - content-type - name: create CUSTOM_TRAIT_2 PUT: /traits/CUSTOM_TRAIT_2 status: 201 response_headers: location: //traits/CUSTOM_TRAIT_2/ response_forbidden_headers: - content-type # NOTE(cdent): This simply tests that traits we know should be # present are in the results. We can't check length here because # the standard traits, which will grow over time, are present. 
- name: list traits GET: /traits status: 200 response_strings: - CUSTOM_TRAIT_1 - CUSTOM_TRAIT_2 - MISC_SHARES_VIA_AGGREGATE - HW_CPU_X86_SHA - name: list traits earlier version GET: /traits request_headers: openstack-api-version: placement 1.5 status: 404 - name: list traits with invalid format of name parameter GET: /traits?name=in_abc status: 400 response_strings: - 'Badly formatted name parameter. Expected name query string parameter in form: ?name=[in|startswith]:[name1,name2|prefix]. Got: \"in_abc\"' - name: list traits with name=in filter GET: /traits?name=in:CUSTOM_TRAIT_1,CUSTOM_TRAIT_2 status: 200 response_json_paths: $.traits.`len`: 2 response_strings: - CUSTOM_TRAIT_1 - CUSTOM_TRAIT_2 - name: create CUSTOM_ANOTHER_TRAIT PUT: /traits/CUSTOM_ANOTHER_TRAIT status: 201 response_headers: location: //traits/CUSTOM_ANOTHER_TRAIT/ response_forbidden_headers: - content-type - name: list traits with prefix GET: /traits?name=startswith:CUSTOM_TRAIT status: 200 response_json_paths: $.traits.`len`: 2 response_strings: - CUSTOM_TRAIT_1 - CUSTOM_TRAIT_2 - name: list traits with invalid parameters GET: /traits?invalid=abc status: 400 response_strings: - "Invalid query string parameters: Additional properties are not allowed" - name: list traits 1.14 no cache headers GET: /traits request_headers: openstack-api-version: placement 1.14 response_forbidden_headers: - cache-control - last-modified - name: list traits 1.15 has cache headers GET: /traits request_headers: openstack-api-version: placement 1.15 response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ - name: get trait 1.14 no cache headers GET: /traits/CUSTOM_TRAIT_1 request_headers: openstack-api-version: placement 1.14 status: 204 response_forbidden_headers: - cache-control - last-modified - name: get trait 1.15 has cache headers GET: /traits/CUSTOM_TRAIT_1 request_headers: openstack-api-version: placement 1.15 status: 204 response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ - name: put trait 1.14 no cache headers PUT: /traits/CUSTOM_TRAIT_1 request_headers: openstack-api-version: placement 1.14 status: 204 response_forbidden_headers: - cache-control - last-modified - name: put trait 1.15 has cache headers PUT: /traits/CUSTOM_TRAIT_1 request_headers: openstack-api-version: placement 1.15 status: 204 response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? 
last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ - name: post new resource provider POST: /resource_providers request_headers: content-type: application/json data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 201 response_headers: location: //resource_providers/[a-f0-9-]+/ response_forbidden_headers: - content-type - name: list traits for resource provider earlier version GET: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: openstack-api-version: placement 1.5 status: 404 - name: list traits for resource provider without traits GET: /resource_providers/$ENVIRON['RP_UUID']/traits status: 200 response_json_paths: $.resource_provider_generation: 0 $.traits.`len`: 0 response_forbidden_headers: # In 1.6 no cache headers - cache-control - last-modified - name: set traits for resource provider earlier version PUT: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: content-type: application/json openstack-api-version: placement 1.5 status: 404 - name: set traits for resource provider PUT: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: content-type: application/json status: 200 data: traits: - CUSTOM_TRAIT_1 - CUSTOM_TRAIT_2 resource_provider_generation: 0 response_json_paths: $.resource_provider_generation: 1 $.traits.`len`: 2 response_strings: - CUSTOM_TRAIT_1 - CUSTOM_TRAIT_2 response_forbidden_headers: # In 1.6 no cache headers - cache-control - last-modified - name: get associated traits GET: /traits?associated=true status: 200 response_json_paths: $.traits.`len`: 2 response_strings: - CUSTOM_TRAIT_1 - CUSTOM_TRAIT_2 - name: get associated traits with invalid value GET: /traits?associated=xyz status: 400 response_strings: - 'The query parameter \"associated\" only accepts \"true\" or \"false\"' - name: set traits for resource provider without resource provider generation PUT: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: content-type: application/json status: 400 data: traits: - CUSTOM_TRAIT_1 - CUSTOM_TRAIT_2 response_strings: - "'resource_provider_generation' is a required property" - name: set traits for resource provider with invalid resource provider generation PUT: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: content-type: application/json status: 400 data: traits: - CUSTOM_TRAIT_1 resource_provider_generation: invalid_generation response_strings: - "'invalid_generation' is not of type 'integer'" - name: set traits for resource provider with conflict generation PUT: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: content-type: application/json openstack-api-version: placement 1.23 status: 409 data: traits: - CUSTOM_TRAIT_1 resource_provider_generation: 5 response_strings: - Resource provider's generation already changed. Please update the generation and try again. 
response_json_paths: $.errors[0].code: placement.concurrent_update - name: set non existed traits for resource provider PUT: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: content-type: application/json status: 400 data: traits: - NON_EXISTED_TRAIT1 - NON_EXISTED_TRAIT2 - CUSTOM_TRAIT_1 resource_provider_generation: 1 response_strings: - No such trait - NON_EXISTED_TRAIT1 - NON_EXISTED_TRAIT2 - name: set traits for resource provider with invalid type of traits PUT: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: content-type: application/json status: 400 data: traits: invalid_type resource_provider_generation: 1 response_strings: - "'invalid_type' is not of type 'array'" - name: set traits for resource provider with additional properties PUT: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: content-type: application/json status: 400 data: traits: - CUSTOM_TRAIT_1 - CUSTOM_TRAIT_2 resource_provider_generation: 1 additional: additional response_strings: - 'Additional properties are not allowed' - name: set traits for non_existed resource provider PUT: /resource_providers/non_existed/traits request_headers: content-type: application/json data: traits: - CUSTOM_TRAIT_1 resource_provider_generation: 1 status: 404 response_strings: - No resource provider with uuid non_existed found - name: list traits for resource provider GET: /resource_providers/$ENVIRON['RP_UUID']/traits status: 200 response_json_paths: $.resource_provider_generation: 1 $.traits.`len`: 2 response_strings: - CUSTOM_TRAIT_1 - CUSTOM_TRAIT_2 - name: delete an in-use trait DELETE: /traits/CUSTOM_TRAIT_1 status: 409 response_strings: - The trait CUSTOM_TRAIT_1 is in use by a resource provider. - name: list traits for non_existed resource provider GET: /resource_providers/non_existed/traits request_headers: content-type: application/json status: 404 response_strings: - No resource provider with uuid non_existed found - name: delete traits for resource provider earlier version DELETE: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: openstack-api-version: placement 1.5 status: 404 - name: delete traits for resource provider DELETE: /resource_providers/$ENVIRON['RP_UUID']/traits status: 204 response_forbidden_headers: - content-type - name: delete traits for non_existed resource provider DELETE: /resource_providers/non_existed/traits status: 404 response_strings: - No resource provider with uuid non_existed found - name: empty traits for resource provider 1.15 has cache headers GET: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: openstack-api-version: placement 1.15 response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ - name: update rp trait 1.14 no cache headers PUT: /resource_providers/$ENVIRON['RP_UUID']/traits data: traits: - CUSTOM_TRAIT_1 - CUSTOM_TRAIT_2 resource_provider_generation: 2 request_headers: openstack-api-version: placement 1.14 content-type: application/json response_forbidden_headers: - cache-control - last-modified - name: update rp trait 1.15 has cache headers PUT: /resource_providers/$ENVIRON['RP_UUID']/traits data: traits: - CUSTOM_TRAIT_1 - CUSTOM_TRAIT_2 resource_provider_generation: 3 request_headers: openstack-api-version: placement 1.15 content-type: application/json response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? 
last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ - name: list traits for resource provider 1.14 no cache headers GET: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: openstack-api-version: placement 1.14 response_forbidden_headers: - cache-control - last-modified - name: list traits for resource provider 1.15 has cache headers GET: /resource_providers/$ENVIRON['RP_UUID']/traits request_headers: openstack-api-version: placement 1.15 response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/unicode.yaml0000664000175000017500000000154500000000000027043 0ustar00zuulzuul00000000000000 fixtures: - APIFixture defaults: request_headers: accept: application/json x-auth-token: admin tests: - name: get an encoded snowman desc: this should fall through to a NotFound on the resource provider object GET: /resources_providers/%e2%98%83 status: 404 - name: post resource provider with snowman POST: /resource_providers request_headers: content-type: application/json data: name: ☃ uuid: $ENVIRON['RP_UUID'] status: 201 response_headers: location: //resource_providers/[a-f0-9-]+/ - name: get that resource provider GET: $LOCATION response_json_paths: $.name: ☃ - name: query by name GET: /resource_providers?name=%e2%98%83 response_json_paths: $.resource_providers[0].name: ☃ - name: delete that one DELETE: /resource_providers/$ENVIRON['RP_UUID'] status: 204 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/usage-legacy-rbac.yaml0000664000175000017500000000267300000000000030673 0ustar00zuulzuul00000000000000--- fixtures: - LegacyRBACPolicyFixture vars: - &project_id 9520f97991e94f30a8dd205ef3ce735a - &project_admin_headers x-auth-token: user x-roles: admin,member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_member_headers x-auth-token: user x-roles: member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest tests: - name: project admin can create resource provider POST: /resource_providers request_headers: *project_admin_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 - name: project member cannot list provider usage GET: /resource_providers/$ENVIRON['RP_UUID']/usages request_headers: *project_member_headers status: 403 - name: project admin can list provider usage GET: /resource_providers/$ENVIRON['RP_UUID']/usages request_headers: *project_admin_headers status: 200 response_json_paths: usages: {} - name: project member cannot get total usage for project GET: /usages?project_id=$ENVIRON['PROJECT_ID'] request_headers: *project_member_headers status: 403 - name: project admin can get total usage for project GET: /usages?project_id=$ENVIRON['PROJECT_ID'] request_headers: *project_admin_headers status: 200 response_json_paths: usages: {} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/usage-policy.yaml0000664000175000017500000000150300000000000030010 0ustar00zuulzuul00000000000000# This tests the individual CRUD 
operations on # /resource_providers/{uuid}/usages and /usages # using a non-admin user with an open policy configuration. The # response validation is intentionally minimal. fixtures: - OpenPolicyFixture defaults: request_headers: x-auth-token: user accept: application/json openstack-api-version: placement latest tests: - name: create provider POST: /resource_providers request_headers: content-type: application/json data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 - name: list provider usages GET: /resource_providers/$ENVIRON['RP_UUID']/usages response_json_paths: usages: {} - name: get total usages for project GET: /usages?project_id=$ENVIRON['PROJECT_ID'] response_json_paths: usages: {} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/usage-secure-rbac.yaml0000664000175000017500000001404400000000000030710 0ustar00zuulzuul00000000000000--- fixtures: - SecureRBACPolicyFixture vars: - &project_id $ENVIRON['PROJECT_ID'] - &project_id_alt $ENVIRON['PROJECT_ID_ALT'] - &admin_project_id $ENVIRON['ADMIN_PROJECT_ID'] - &service_project_id $ENVIRON['SERVICE_PROJECT_ID'] - &admin_headers x-auth-token: user x-roles: admin x-project-id: admin_project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &service_headers x-auth-token: user x-roles: service x-project-id: service_project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &system_admin_headers x-auth-token: user x-roles: admin,member,reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &system_reader_headers x-auth-token: user x-roles: reader accept: application/json content-type: application/json openstack-api-version: placement latest openstack-system-scope: all - &project_admin_headers x-auth-token: user x-roles: admin,member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_member_headers x-auth-token: user x-roles: member,reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &project_reader_headers x-auth-token: user x-roles: reader x-project-id: *project_id accept: application/json content-type: application/json openstack-api-version: placement latest - &alt_project_admin_headers x-auth-token: user x-roles: admin,member,reader x-project-id: *project_id_alt accept: application/json content-type: application/json openstack-api-version: placement latest - &alt_project_member_headers x-auth-token: user x-roles: member,reader x-project-id: *project_id_alt accept: application/json content-type: application/json openstack-api-version: placement latest - &alt_project_reader_headers x-auth-token: user x-roles: reader x-project-id: *project_id_alt accept: application/json content-type: application/json openstack-api-version: placement latest tests: - name: admin can create resource provider POST: /resource_providers request_headers: *admin_headers data: name: $ENVIRON['RP_NAME'] uuid: $ENVIRON['RP_UUID'] status: 200 - name: project admin can list provider usage GET: /resource_providers/$ENVIRON['RP_UUID']/usages request_headers: *project_admin_headers status: 200 response_json_paths: usages: {} - name: admin can list provider usage GET: 
/resource_providers/$ENVIRON['RP_UUID']/usages request_headers: *admin_headers status: 200 response_json_paths: usages: {} - name: service can list provider usage GET: /resource_providers/$ENVIRON['RP_UUID']/usages request_headers: *service_headers status: 200 response_json_paths: usages: {} - name: project member cannot list provider usage GET: /resource_providers/$ENVIRON['RP_UUID']/usages request_headers: *project_member_headers status: 403 - name: project reader cannot list provider usage GET: /resource_providers/$ENVIRON['RP_UUID']/usages request_headers: *project_reader_headers status: 403 - name: system reader cannot list provider usage GET: /resource_providers/$ENVIRON['RP_UUID']/usages request_headers: *system_reader_headers status: 403 - name: system admin cannot list provider usage GET: /resource_providers/$ENVIRON['RP_UUID']/usages request_headers: *system_admin_headers status: 403 - name: project admin can get total usage for project GET: /usages?project_id=$ENVIRON['PROJECT_ID'] request_headers: *project_admin_headers status: 200 response_json_paths: usages: {} - name: project member can get total usage for project GET: /usages?project_id=$ENVIRON['PROJECT_ID'] request_headers: *project_member_headers status: 200 response_json_paths: usages: {} - name: project reader can get total usage for project GET: /usages?project_id=$ENVIRON['PROJECT_ID'] request_headers: *project_reader_headers status: 200 response_json_paths: usages: {} # Make sure users from other projects can't snoop around for usage on projects # they have no business knowing about. - name: project member cannot get total usage for unauthorized project GET: /usages?project_id=$ENVIRON['PROJECT_ID'] request_headers: *alt_project_member_headers status: 403 - name: project reader cannot get total usage for unauthorized project GET: /usages?project_id=$ENVIRON['PROJECT_ID'] request_headers: *alt_project_reader_headers status: 403 # Admin in any project(legacy admin) will be able to get usage on other # projects. 
- name: admin can get total usage for other project GET: /usages?project_id=$ENVIRON['PROJECT_ID'] request_headers: *alt_project_admin_headers status: 200 - name: project member cannot get total usage for unauthorized project GET: /usages?project_id=$ENVIRON['PROJECT_ID'] request_headers: *alt_project_member_headers status: 403 - name: project reader cannot get total usage for unauthorized project GET: /usages?project_id=$ENVIRON['PROJECT_ID'] request_headers: *alt_project_reader_headers status: 403 - name: admin can get total usage for project GET: /usages?project_id=$ENVIRON['PROJECT_ID'] request_headers: *admin_headers status: 200 response_json_paths: usages: {} - name: service can get total usage for project GET: /usages?project_id=$ENVIRON['PROJECT_ID'] request_headers: *service_headers status: 200 response_json_paths: usages: {} - name: system reader cannot get total usage for project GET: /usages?project_id=$ENVIRON['PROJECT_ID'] request_headers: *system_reader_headers status: 403 - name: system admin cannot get total usage for project GET: /usages?project_id=$ENVIRON['PROJECT_ID'] request_headers: *system_admin_headers status: 403 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/usage.yaml0000664000175000017500000000741000000000000026516 0ustar00zuulzuul00000000000000# More interesting tests for usages are in with_allocations fixtures: - APIFixture defaults: request_headers: accept: application/json x-auth-token: admin tests: - name: fail to get usages for missing provider GET: /resource_providers/fae14fa3-4b43-498c-a33c-4a1d00edb577/usages status: 404 response_strings: - No resource provider with uuid fae14fa3-4b43-498c-a33c-4a1d00edb577 found response_json_paths: $.errors[0].title: Not Found - name: create provider POST: /resource_providers request_headers: content-type: application/json data: name: a name status: 201 - name: check provider exists GET: $LOCATION response_json_paths: name: a name - name: get empty usages GET: $LAST_URL/usages request_headers: content-type: application/json response_json_paths: usages: {} - name: get usages no cache headers base microversion GET: $LAST_URL response_forbidden_headers: - last-modified - cache-control - name: get usages cache headers 1.15 GET: $LAST_URL request_headers: openstack-api-version: placement 1.15 response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? 
last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ - name: get total usages earlier version GET: /usages?project_id=$ENVIRON['PROJECT_ID'] request_headers: openstack-api-version: placement 1.8 status: 404 - name: get total usages no project or user GET: /usages request_headers: openstack-api-version: placement 1.9 status: 400 - name: get empty usages with project id GET: /usages?project_id=$ENVIRON['PROJECT_ID'] request_headers: openstack-api-version: placement 1.9 response_json_paths: usages: {} - name: get empty usages with project id and user id GET: /usages?project_id=$ENVIRON['PROJECT_ID']&user_id=78725f09-5c01-4c9e-97a5-98d75e1e32b1 request_headers: openstack-api-version: placement 1.9 response_json_paths: usages: {} - name: get total usages project_id less than min length GET: /usages?project_id= request_headers: openstack-api-version: placement 1.9 status: 400 response_strings: - "Failed validating 'minLength'" - name: get total usages user_id less than min length GET: /usages?project_id=$ENVIRON['PROJECT_ID']&user_id= request_headers: openstack-api-version: placement 1.9 status: 400 response_strings: - "Failed validating 'minLength'" - name: get total usages project_id exceeds max length GET: /usages?project_id=78725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b1 request_headers: openstack-api-version: placement 1.9 status: 400 response_strings: - "Failed validating 'maxLength'" - name: get total usages user_id exceeds max length GET: /usages?project_id=$ENVIRON['PROJECT_ID']&user_id=78725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b178725f09-5c01-4c9e-97a5-98d75e1e32b1 request_headers: openstack-api-version: placement 1.9 status: 400 response_strings: - "Failed validating 'maxLength'" - name: get total usages with additional param GET: /usages?project_id=$ENVIRON['PROJECT_ID']&user_id=78725f09-5c01-4c9e-97a5-98d75e1e32b1&dummy=1 request_headers: openstack-api-version: placement 1.9 status: 400 response_strings: - "Additional properties are not allowed" ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/gabbits/with-allocations.yaml0000664000175000017500000001145200000000000030674 0ustar00zuulzuul00000000000000 fixtures: - AllocationFixture defaults: request_headers: x-auth-token: admin tests: - name: confirm inventories GET: /resource_providers/$ENVIRON['RP_UUID']/inventories response_json_paths: $.inventories.DISK_GB.total: 2048 $.inventories.DISK_GB.reserved: 0 - name: get usages GET: /resource_providers/$ENVIRON['RP_UUID']/usages response_headers: # use a regex here because charset, which is not only not # required but superfluous, is present content-type: /application/json/ response_json_paths: $.resource_provider_generation: 5 $.usages.DISK_GB: 1020 $.usages.VCPU: 7 - name: get allocations GET: /resource_providers/$ENVIRON['RP_UUID']/allocations response_headers: content-type: /application/json/ response_json_paths: $.allocations.`len`: 3 $.allocations["$ENVIRON['CONSUMER_0']"].resources: DISK_GB: 1000 
$.allocations["$ENVIRON['CONSUMER_ID']"].resources: VCPU: 6 $.allocations["$ENVIRON['ALT_CONSUMER_ID']"].resources: VCPU: 1 DISK_GB: 20 $.resource_provider_generation: 5 - name: fail to delete resource provider DELETE: /resource_providers/$ENVIRON['RP_UUID'] status: 409 response_strings: - "Unable to delete resource provider $ENVIRON['RP_UUID']: Resource provider has allocations." - name: fail to change inventory via put 1.23 PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: accept: application/json content-type: application/json openstack-api-version: placement 1.23 data: resource_provider_generation: 5 inventories: {} status: 409 response_json_paths: $.errors[0].code: placement.inventory.inuse - name: fail to delete all inventory DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: accept: application/json openstack-api-version: placement 1.5 status: 409 response_headers: content-type: /application/json/ response_strings: - "Inventory for 'VCPU, DISK_GB' on resource provider '$ENVIRON['RP_UUID']' in use" - name: fail to delete all inventory 1.23 DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: accept: application/json openstack-api-version: placement 1.23 status: 409 response_headers: content-type: /application/json/ response_strings: - "Inventory for 'VCPU, DISK_GB' on resource provider '$ENVIRON['RP_UUID']' in use" response_json_paths: $.errors[0].code: placement.inventory.inuse # We can change inventory in a way that makes existing allocations exceed the # new capacity. This is allowed. - name: change inventory despite capacity exceeded PUT: /resource_providers/$ENVIRON['RP_UUID']/inventories request_headers: accept: application/json content-type: application/json data: resource_provider_generation: 5 inventories: DISK_GB: total: 1019 VCPU: total: 97 status: 200 - name: get total usages by project GET: /usages?project_id=$ENVIRON['PROJECT_ID'] request_headers: openstack-api-version: placement 1.9 status: 200 response_json_paths: $.usages.DISK_GB: 1020 $.usages.VCPU: 7 - name: get total usages by project and user GET: /usages?project_id=$ENVIRON['PROJECT_ID']&user_id=$ENVIRON['USER_ID'] request_headers: openstack-api-version: placement 1.9 status: 200 response_json_paths: $.usages.DISK_GB: 1000 $.usages.VCPU: 6 - name: get total usages by project and alt user GET: /usages?project_id=$ENVIRON['PROJECT_ID']&user_id=$ENVIRON['ALT_USER_ID'] request_headers: openstack-api-version: placement 1.9 status: 200 # In pre 1.15 microversions cache headers not present response_forbidden_headers: - last-modified - cache-control response_json_paths: $.usages.DISK_GB: 20 $.usages.VCPU: 1 - name: get allocations without project and user GET: /allocations/$ENVIRON['CONSUMER_ID'] request_headers: openstack-api-version: placement 1.11 accept: application/json response_json_paths: # only one key in the top level object $.`len`: 1 - name: get allocations with project and user GET: /allocations/$ENVIRON['CONSUMER_ID'] request_headers: openstack-api-version: placement 1.12 accept: application/json response_json_paths: $.project_id: $ENVIRON['PROJECT_ID'] $.user_id: $ENVIRON['USER_ID'] $.`len`: 3 - name: get total usages with cache headers GET: /usages?project_id=$ENVIRON['PROJECT_ID']&user_id=$ENVIRON['ALT_USER_ID'] request_headers: openstack-api-version: placement 1.15 response_headers: cache-control: no-cache # Does last-modified look like a legit timestamp? 
last-modified: /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/ ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/test_allocation.py0000664000175000017500000001447000000000000026655 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import os_resource_classes as orc from oslo_serialization import jsonutils from oslo_utils.fixture import uuidsentinel as uuids from placement import direct from placement import exception from placement.objects import project as project_obj from placement.tests.functional import base class TestAllocationProjectCreateRace(base.TestCase): """Test that two allocation update request racing to create the project in the database. This test is added to reproduce the bug https://storyboard.openstack.org/#!/story/2009159 where the transaction that lost the project creation race fails as it tries to read the created project in the same transaction which is inactive due to the previous 'Duplicate entry' error. """ def setUp(self): super(TestAllocationProjectCreateRace, self).setUp() # Create resource provider and inventory for tests conf = self.conf_fixture.conf rp_data = jsonutils.dump_as_bytes({ 'name': 'a provider', 'uuid': uuids.rp, }) inv_data = jsonutils.dump_as_bytes({ 'inventories': { orc.VCPU: { 'total': 5, } }, 'resource_provider_generation': 0, }) self.headers = { 'x-auth-token': 'admin', 'content-type': 'application/json', 'OpenStack-API-Version': 'placement 1.38', 'X_ROLES': 'admin,service' } with direct.PlacementDirect(conf) as client: # Create a resource provider url = '/resource_providers' resp = client.post(url, data=rp_data, headers=self.headers) self.assertEqual(200, resp.status_code) # Add inventory to the resource provider url = '/resource_providers/%s/inventories' % uuids.rp resp = client.put(url, data=inv_data, headers=self.headers) self.assertEqual(200, resp.status_code) # simulate that when the below allocation update call tries to fetch # the project it gets ProjectNotFound but at the same time a # "parallel" transaction creates the project, so the project creation # will fail real_get_project = project_obj.Project.get_by_external_id def fake_get_project(cls, ctx, external_id): if not hasattr(fake_get_project, 'called'): proj = project_obj.Project(ctx, external_id=external_id) proj.create() fake_get_project.called = True raise exception.ProjectNotFound(external_id) else: return real_get_project(ctx, external_id) self.useFixture( fixtures.MonkeyPatch( 'placement.objects.project.Project.get_by_external_id', fake_get_project) ) def test_set_allocations_for_consumer(self): alloc_data = jsonutils.dump_as_bytes({ 'allocations': { uuids.rp: { 'resources': { orc.VCPU: 1, }, } }, 'project_id': uuids.project, 'user_id': uuids.user, 'consumer_generation': None, 'consumer_type': 'INSTANCE', }) conf = self.conf_fixture.conf with direct.PlacementDirect(conf) as client: # Create allocations url = '/allocations/%s' % uuids.consumer resp = 
client.put(url, data=alloc_data, headers=self.headers) # https://storyboard.openstack.org/#!/story/2009159 The expected # behavior would be that the allocation update succeeds as the # transaction can fetch the Project created by a racing transaction self.assertEqual(204, resp.status_code) def test_set_allocations(self): alloc_data = jsonutils.dump_as_bytes({ uuids.consumer: { 'project_id': uuids.project, 'user_id': uuids.user, 'consumer_generation': None, 'consumer_type': 'INSTANCE', 'allocations': { uuids.rp: { 'resources': { orc.VCPU: 1, }, } } } }) conf = self.conf_fixture.conf with direct.PlacementDirect(conf) as client: # Create allocations url = '/allocations' resp = client.post(url, data=alloc_data, headers=self.headers) self.assertEqual(204, resp.status_code) def test_reshape(self): alloc_data = jsonutils.dump_as_bytes({ 'allocations': { uuids.consumer: { 'allocations': { uuids.rp: { 'resources': { orc.VCPU: 1, }, } }, 'project_id': uuids.project, 'user_id': uuids.user, 'consumer_generation': None, 'consumer_type': 'INSTANCE', } }, 'inventories': { uuids.rp: { 'inventories': { orc.VCPU: { 'total': 5, } }, 'resource_provider_generation': 1, } } }) conf = self.conf_fixture.conf with direct.PlacementDirect(conf) as client: # Create allocations url = '/reshaper' resp = client.post(url, data=alloc_data, headers=self.headers) self.assertEqual(204, resp.status_code) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/test_allocation_candidates.py0000664000175000017500000002002600000000000031026 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections from placement import direct from placement.tests.functional import base from placement.tests.functional.db import test_base as tb class TestWideTreeAllocationCandidateExplosion(base.TestCase): """Test candidate generation ordering and limiting in wide symmetric trees, i.e. with trees of many similar child RPs. """ def setUp(self): super().setUp() self.headers = { 'x-auth-token': 'admin', 'content-type': 'application/json', 'OpenStack-API-Version': 'placement 1.38', 'X_ROLES': 'admin,service' } self.conf_fixture.conf.set_override( "max_allocation_candidates", 100000, group="placement") self.conf_fixture.conf.set_override( "allocation_candidates_generation_strategy", "breadth-first", group="placement") def create_tree(self, num_roots, num_child, num_res_per_child): self.roots = {} for i in range(num_roots): compute = tb.create_provider( self.context, f'compute{i}') self.roots[compute.uuid] = compute.name tb.add_inventory(compute, 'VCPU', 8) tb.add_inventory(compute, 'MEMORY_MB', 4096) tb.add_inventory(compute, 'DISK_GB', 500) for j in range(num_child): child = tb.create_provider( self.context, f'compute{i}:PF{j}', parent=compute.uuid) tb.add_inventory(child, 'CUSTOM_VF', num_res_per_child) @staticmethod def get_candidate_query(num_groups, num_res, limit): query = ("/allocation_candidates?" 
"resources=DISK_GB%3A20%2CMEMORY_MB%3A2048%2CVCPU%3A2") for g in range(num_groups): query += f"&resources{g}=CUSTOM_VF%3A{num_res}" query += "&group_policy=none" query += f"&limit={limit}" return query def _test_num_candidates_and_computes( self, computes, pfs, vfs_per_pf, req_groups, req_res_per_group, req_limit, expected_candidates, expected_computes_with_candidates ): self.create_tree( num_roots=computes, num_child=pfs, num_res_per_child=vfs_per_pf) conf = self.conf_fixture.conf with direct.PlacementDirect(conf) as client: resp = client.get( self.get_candidate_query( num_groups=req_groups, num_res=req_res_per_group, limit=req_limit), headers=self.headers) self.assertEqual(200, resp.status_code) body = resp.json() self.assertEqual(expected_candidates, len(body["allocation_requests"])) root_rps = set(self.roots.keys()) roots_with_candidates = set() nr_of_candidates_per_compute = collections.Counter() for ar in body["allocation_requests"]: allocated_rps = set(ar["allocations"].keys()) root_allocated_rps = allocated_rps.intersection(root_rps) roots_with_candidates |= root_allocated_rps nr_of_candidates_per_compute.update(root_allocated_rps) self.assertEqual( expected_computes_with_candidates, len(roots_with_candidates)) def test_all_candidates_generated_and_returned(self): self._test_num_candidates_and_computes( computes=2, pfs=8, vfs_per_pf=8, req_groups=2, req_res_per_group=1, req_limit=1000, expected_candidates=2 * 64, expected_computes_with_candidates=2,) def test_requested_limit_is_hit_result_balanced(self): # 8192 possible candidates, all generated, returned 1000, # result is balanced due to python sets usage self._test_num_candidates_and_computes( computes=2, pfs=8, vfs_per_pf=8, req_groups=4, req_res_per_group=1, req_limit=1000, expected_candidates=1000, expected_computes_with_candidates=2) def test_too_many_candidates_global_limit_is_hit_result_unbalanced(self): self.conf_fixture.conf.set_override( "allocation_candidates_generation_strategy", "depth-first", group="placement") # With max_allocation_candidates set to 100k limit this test now # runs in reasonable time (10 sec on my machine), without that it would # time out. # However, with depth-first strategy and with the global limit in place # only the first compute gets candidates. # 524288 valid candidates, the generation stops at 100k candidates, # only 1000 is returned, result is unbalanced as the first 100k # candidate is always from the first compute. self._test_num_candidates_and_computes( computes=2, pfs=8, vfs_per_pf=8, req_groups=6, req_res_per_group=1, req_limit=1000, expected_candidates=1000, expected_computes_with_candidates=1) def test_too_many_candidates_global_limit_is_hit_breadth_first_balanced( self ): # With max_allocation_candidates set to 100k limit this test now # runs in reasonable time (10 sec on my machine), without that it would # time out. # With the round-robin candidate generator in place the 100k generated # candidates spread across both computes now. 
# 524288 valid candidates, the generation stops at 100k candidates, # only 1000 is returned, result is balanced between the computes self._test_num_candidates_and_computes( computes=2, pfs=8, vfs_per_pf=8, req_groups=6, req_res_per_group=1, req_limit=1000, expected_candidates=1000, expected_computes_with_candidates=2) def test_global_limit_hit(self): # 8192 possible candidates, global limit is set to 8000, higher request # limit so the number of candidates is limited by the global limit self.conf_fixture.conf.set_override( "max_allocation_candidates", 8000, group="placement") self._test_num_candidates_and_computes( computes=2, pfs=8, vfs_per_pf=8, req_groups=4, req_res_per_group=1, req_limit=9000, expected_candidates=8000, expected_computes_with_candidates=2) def test_no_global_limit(self): # 8192 possible candidates, there is no global limit, high request # limit so all candidates are returned self.conf_fixture.conf.set_override( "max_allocation_candidates", -1, group="placement") self._test_num_candidates_and_computes( computes=2, pfs=8, vfs_per_pf=8, req_groups=4, req_res_per_group=1, req_limit=9000, expected_candidates=8192, expected_computes_with_candidates=2) def test_breadth_first_strategy_generates_stable_ordering(self): """Run the same query twice against the same two trees and assert that the response text is exactly the same, proving that even with the breadth-first strategy the candidate ordering is stable. """ self.create_tree(num_roots=2, num_child=8, num_res_per_child=8) def query(): return client.get( self.get_candidate_query( num_groups=2, num_res=1, limit=1000), headers=self.headers) conf = self.conf_fixture.conf with direct.PlacementDirect(conf) as client: resp = query() self.assertEqual(200, resp.status_code) body1 = resp.text resp = query() self.assertEqual(200, resp.status_code) body2 = resp.text self.assertEqual(body1, body2) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/test_api.py0000664000175000017500000000305300000000000025274 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from oslotest import output import wsgi_intercept from gabbi import driver from placement.tests.functional.fixtures import capture from placement.tests.functional.fixtures import gabbits as fixtures # Check that wsgi application response headers are always # native str. wsgi_intercept.STRICT_RESPONSE_HEADERS = True TESTS_DIR = 'gabbits' def load_tests(loader, tests, pattern): """Provide a TestSuite to the discovery process.""" test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR) # These inner fixtures provide per test request output and log # capture, for cleaner results reporting.
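# Note: driver.build_tests() below turns each YAML file in the gabbits/ # directory into an ordered sequence of HTTP test cases and runs them # against the in-process WSGI app supplied by the setup_app intercept, so # no real network listener is required.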
inner_fixtures = [ output.CaptureOutput, capture.Logging, ] return driver.build_tests(test_dir, loader, host=None, test_loader_name=__name__, intercept=fixtures.setup_app, inner_fixtures=inner_fixtures, fixture_module=fixtures) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/test_direct.py0000664000175000017500000000640200000000000025776 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from oslo_policy import opts as policy_opts from oslo_utils.fixture import uuidsentinel from placement import conf from placement import direct from placement.tests.functional import base class TestDirect(base.TestCase): def setUp(self): super(TestDirect, self).setUp() self.conf = cfg.ConfigOpts() conf.register_opts(self.conf) policy_opts.set_defaults(self.conf) def test_direct_is_there(self): with direct.PlacementDirect(self.conf) as client: resp = client.get('/') self.assertTrue(resp) data = resp.json() self.assertEqual('v1.0', data['versions'][0]['id']) def test_get_resource_providers(self): with direct.PlacementDirect(self.conf) as client: resp = client.get('/resource_providers') self.assertTrue(resp) data = resp.json() self.assertEqual([], data['resource_providers']) def test_create_resource_provider(self): data = {'name': 'fake'} with direct.PlacementDirect(self.conf) as client: resp = client.post('/resource_providers', json=data) self.assertTrue(resp) resp = client.get('/resource_providers') self.assertTrue(resp) data = resp.json() self.assertEqual(1, len(data['resource_providers'])) def test_json_validation_happens(self): data = {'name': 'fake', 'cowsay': 'moo'} with direct.PlacementDirect(self.conf) as client: resp = client.post('/resource_providers', json=data) self.assertFalse(resp) self.assertEqual(400, resp.status_code) def test_microversion_handling(self): with direct.PlacementDirect(self.conf) as client: # create parent parent_data = {'name': uuidsentinel.p_rp, 'uuid': uuidsentinel.p_rp} resp = client.post('/resource_providers', json=parent_data) self.assertTrue(resp, resp.text) # attempt to create child data = {'name': 'child', 'parent_provider_uuid': uuidsentinel.p_rp} # no microversion, 400 resp = client.post('/resource_providers', json=data) self.assertFalse(resp) self.assertEqual(400, resp.status_code) # low microversion, 400 resp = client.post('/resource_providers', json=data, microversion='1.13') self.assertFalse(resp) self.assertEqual(400, resp.status_code) resp = client.post('/resource_providers', json=data, microversion='1.14') self.assertTrue(resp, resp.text) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/test_lib_sync.py0000664000175000017500000000335200000000000026327 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with 
the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os_resource_classes import os_traits from placement import direct from placement.tests.functional import base class TestLibSync(base.TestCase): """Test that traits and resource classes are synced from os-traits and os-resource-classes libs to the DB at service startup. """ def setUp(self): super().setUp() self.headers = { 'x-auth-token': 'admin', 'content-type': 'application/json', 'OpenStack-API-Version': 'placement latest', } def test_traits_sync(self): with direct.PlacementDirect(self.conf_fixture.conf) as client: resp = client.get('/traits', headers=self.headers) self.assertCountEqual( os_traits.get_traits(), resp.json()['traits'], ) def test_resource_classes_sync(self): with direct.PlacementDirect(self.conf_fixture.conf) as client: resp = client.get('/resource_classes', headers=self.headers) self.assertCountEqual( os_resource_classes.STANDARDS, [rc['name'] for rc in resp.json()['resource_classes']], resp.json(), ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/functional/test_verify_policy.py0000664000175000017500000000355500000000000027415 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from placement import direct from placement import handler from placement.tests.functional import base class TestVerifyPolicy(base.TestCase): """Verify that all defined placement routes have a policy.""" # Paths that don't need a policy check EXCEPTIONS = ['/', ''] def _test_request_403(self, client, method, route): headers = { 'x-auth-token': 'user', 'content-type': 'application/json' } request_method = getattr(client, method.lower()) # We send an empty request body on all requests. Because # policy handling comes before other processing, the value # of the body is irrelevant.
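# A plain 'user' token carries no admin or service role, so the default # policies should reject it on every route; each request below is # expected to come back 403 regardless of method.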
response = request_method(route, data='', headers=headers) self.assertEqual( 403, response.status_code, 'method %s on route %s is open for user, status: %s' % (method, route, response.status_code)) def test_verify_policy(self): conf = self.conf_fixture.conf with direct.PlacementDirect(conf, latest_microversion=True) as client: for route, methods in handler.ROUTE_DECLARATIONS.items(): if route in self.EXCEPTIONS: continue for method in methods: self._test_request_403(client, method, route) ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.280778 openstack_placement-13.0.0/placement/tests/unit/0000775000175000017500000000000000000000000021726 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/__init__.py0000664000175000017500000000000000000000000024025 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/base.py0000664000175000017500000000212200000000000023207 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import fixtures import testtools class ContextTestCase(testtools.TestCase): """Base class for tests that need mocked attribute caches on Context. """ def setUp(self): super(ContextTestCase, self).setUp() self.useFixture( fixtures.MockPatch('placement.attribute_cache.ConsumerTypeCache')) self.useFixture( fixtures.MockPatch('placement.attribute_cache.ResourceClassCache')) self.useFixture( fixtures.MockPatch('placement.attribute_cache.TraitCache')) ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.280778 openstack_placement-13.0.0/placement/tests/unit/cmd/0000775000175000017500000000000000000000000022471 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/cmd/__init__.py0000664000175000017500000000000000000000000024570 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/cmd/test_manage.py0000664000175000017500000002101200000000000025326 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import sys from unittest import mock from oslo_config import cfg from oslo_config import fixture as config_fixture from oslotest import output import testtools from placement.cmd import manage from placement import conf from placement.tests.unit import base class TestCommandParsers(testtools.TestCase): def setUp(self): super(TestCommandParsers, self).setUp() self.conf = cfg.ConfigOpts() conf_fixture = config_fixture.Config(self.conf) self.useFixture(conf_fixture) conf.register_opts(conf_fixture.conf) # Quiet output from argparse (used within oslo_config). # If you are debugging, commenting this out might be useful. self.output = self.useFixture( output.CaptureOutput(do_stderr=True, do_stdout=True)) # We don't use a database, but we need to set the opt as # it's required for a valid config. conf_fixture.config(group="placement_database", connection='sqlite://') command_opts = manage.setup_commands(conf_fixture) # Command line opts must be registered on the conf_fixture, otherwise # they carry over globally. conf_fixture.register_cli_opts(command_opts) def test_commands_associated(self): """Test that commands get parsed as desired. This leaves out --version, which is built into oslo.config's handling. """ for command, args in [ ('db_version', ['db', 'version']), ('db_sync', ['db', 'sync']), ('db_stamp', ['db', 'stamp', 'b4ed3a175331']), ('db_online_data_migrations', ['db', 'online_data_migrations'])]: with mock.patch('placement.cmd.manage.DbCommands.' + command) as mock_command: self.conf(args, default_config_files=[]) self.conf.command.func() mock_command.assert_called_once_with() def test_non_command(self): """A non-existent command should fail.""" self.assertRaises(SystemExit, self.conf, ['pony'], default_config_files=[]) def test_empty_command(self): """An empty command should create no func.""" def parse_conf(): self.conf([], default_config_files=[]) def get_func(): return self.conf.command.func parse_conf() self.assertRaises(cfg.NoSuchOptError, get_func) def test_too_many_args(self): self.assertRaises(SystemExit, self.conf, ['version', '5'], default_config_files=[]) self.output.stderr.seek(0) if sys.version_info >= (3, 12, 8): message = "choose from db" else: message = "choose from 'db'" self.assertIn(message, self.output.stderr.read()) def test_help_message(self): """Test that help output for sub commands shows right commands.""" self.conf(['db'], default_config_files=[]) self.conf.command.func() self.output.stdout.seek(0) self.output.stderr.seek(0) self.assertIn('{sync,version,stamp,online_data_migrations}', self.output.stdout.read()) class TestDBCommands(base.ContextTestCase): def setUp(self): super(TestDBCommands, self).setUp() self.conf = cfg.ConfigOpts() conf_fixture = config_fixture.Config(self.conf) self.useFixture(conf_fixture) conf.register_opts(conf_fixture.conf) conf_fixture.config(group="placement_database", connection='sqlite://') command_opts = manage.setup_commands(conf_fixture) conf_fixture.register_cli_opts(command_opts) self.output = self.useFixture( output.CaptureOutput(do_stderr=True, do_stdout=True)) def _command_setup(self, max_count=None): command_list = ["db", "online_data_migrations"] if max_count is not None: command_list.extend(["--max-count", str(max_count)]) self.conf(command_list, project='placement', default_config_files=None) return manage.DbCommands(self.conf) def test_online_migrations(self): # Mock two online migrations mock_mig1 = mock.MagicMock(__name__="mock_mig_1") mock_mig2 = mock.MagicMock(__name__="mock_mig_2") mock_mig1.side_effect = [(10, 
10), (0, 0)] mock_mig2.side_effect = [(15, 15), (0, 0)] mock_migrations = (mock_mig1, mock_mig2) with mock.patch('placement.cmd.manage.online_migrations', new=mock_migrations): commands = self._command_setup() commands.db_online_data_migrations() expected = '''\ Running batches of 50 until complete 10 rows matched query mock_mig_1, 10 migrated 15 rows matched query mock_mig_2, 15 migrated +------------+-------------+-----------+ | Migration | Total Found | Completed | +------------+-------------+-----------+ | mock_mig_1 | 10 | 10 | | mock_mig_2 | 15 | 15 | +------------+-------------+-----------+ ''' self.output.stdout.seek(0) self.assertEqual(expected, self.output.stdout.read()) def test_online_migrations_error(self): good_remaining = [50] def good_migration(context, count): found = good_remaining[0] done = min(found, count) good_remaining[0] -= done return found, done bad_migration = mock.MagicMock() bad_migration.side_effect = Exception("Mock Exception") bad_migration.__name__ = 'bad' mock_migrations = (bad_migration, good_migration) with mock.patch('placement.cmd.manage.online_migrations', new=mock_migrations): # bad_migration raises an exception, but it could be because # good_migration had not completed yet. We should get 1 in this # case, because some work was done, and the command should be # reiterated. commands = self._command_setup(max_count=50) self.assertEqual(1, commands.db_online_data_migrations()) # When running this for the second time, there's no work left for # good_migration to do, but bad_migration still fails - should # get 2 this time. self.assertEqual(2, commands.db_online_data_migrations()) # When --max-count is not used, we should get 2 if all possible # migrations completed but some raise exceptions commands = self._command_setup() good_remaining = [125] self.assertEqual(2, commands.db_online_data_migrations()) def test_online_migrations_bad_max(self): commands = self._command_setup(max_count=-2) self.assertEqual(127, commands.db_online_data_migrations()) commands = self._command_setup(max_count="a") self.assertEqual(127, commands.db_online_data_migrations()) commands = self._command_setup(max_count=0) self.assertEqual(127, commands.db_online_data_migrations()) def test_online_migrations_no_max(self): with mock.patch('placement.cmd.manage.DbCommands.' '_run_online_migration') as rm: rm.return_value = {}, False commands = self._command_setup() self.assertEqual(0, commands.db_online_data_migrations()) def test_online_migrations_finished(self): with mock.patch('placement.cmd.manage.DbCommands.' '_run_online_migration') as rm: rm.return_value = {}, False commands = self._command_setup(max_count=5) self.assertEqual(0, commands.db_online_data_migrations()) def test_online_migrations_not_finished(self): with mock.patch('placement.cmd.manage.DbCommands.' 
'_run_online_migration') as rm: rm.return_value = {'mig': (10, 5)}, False commands = self._command_setup(max_count=5) self.assertEqual(1, commands.db_online_data_migrations()) ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.280778 openstack_placement-13.0.0/placement/tests/unit/handlers/0000775000175000017500000000000000000000000023526 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/handlers/__init__.py0000664000175000017500000000000000000000000025625 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/handlers/test_aggregate.py0000664000175000017500000000325400000000000027071 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Unit tests for code in the aggregate handler that gabbi isn't covering.""" from unittest import mock import webob from placement import context from placement import exception from placement.handlers import aggregate from placement.objects import resource_provider from placement.tests.unit import base class TestAggregateHandlerErrors(base.ContextTestCase): """Tests that make sure errors hard to trigger by gabbi result in expected exceptions. """ def test_concurrent_exception_causes_409(self): fake_context = context.RequestContext( user_id='fake', project_id='fake') rp = resource_provider.ResourceProvider(fake_context) expected_message = ('Update conflict: Another thread concurrently ' 'updated the data') with mock.patch("placement.objects.resource_provider._set_aggregates", side_effect=exception.ConcurrentUpdateDetected): exc = self.assertRaises(webob.exc.HTTPConflict, aggregate._set_aggregates, rp, []) self.assertIn(expected_message, str(exc)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/handlers/test_resource_provider.py0000664000175000017500000000532600000000000030706 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unit tests for code in the resource provider handler that gabbi isn't covering. 
""" from unittest import mock import microversion_parse from oslo_db import exception as db_exc import webob from placement import context from placement.handlers import resource_provider from placement.tests.unit import base class TestAggregateHandlerErrors(base.ContextTestCase): @mock.patch('placement.context.RequestContext.can', new=mock.Mock()) def _test_duplicate_error_parsing_mysql(self, key): fake_context = context.RequestContext( user_id='fake', project_id='fake') req = webob.Request.blank( '/resource_providers', method='POST', content_type='application/json') req.body = b'{"name": "foobar"}' req.environ['placement.context'] = fake_context parse_version = microversion_parse.parse_version_string microversion = parse_version('1.15') microversion.max_version = parse_version('9.99') microversion.min_version = parse_version('1.0') req.environ['placement.microversion'] = microversion with mock.patch( 'placement.objects.resource_provider.ResourceProvider.create', side_effect=db_exc.DBDuplicateEntry(columns=[key]), ): response = req.get_response( resource_provider.create_resource_provider) self.assertEqual('409 Conflict', response.status) self.assertIn( 'Conflicting resource provider name: foobar already exists.', response.text) def test_duplicate_error_parsing_mysql_5x(self): """Ensure we parse the correct column on MySQL 5.x. On MySQL 5.x, DBDuplicateEntry.columns will contain the name of the column causing the integrity error. """ self._test_duplicate_error_parsing_mysql('name') def test_duplicate_error_parsing_mysql_8x(self): """Ensure we parse the correct column on MySQL 5.x. On MySQL 5.x, DBDuplicateEntry.columns will contain the name of the constraint causing the integrity error. """ self._test_duplicate_error_parsing_mysql( 'uniq_resource_providers0name') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/handlers/test_trait.py0000664000175000017500000000545000000000000026266 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Unit tests for code in the trait handler that gabbi cannot easily cover.""" from unittest import mock import microversion_parse import webob from placement import context from placement import exception from placement.handlers import trait from placement.tests.unit import base class TestTraitHandler(base.ContextTestCase): @mock.patch('placement.objects.trait.Trait.create') @mock.patch('placement.objects.trait.Trait.get_by_name') @mock.patch('placement.context.RequestContext.can') @mock.patch('placement.util.wsgi_path_item', return_value='CUSTOM_FOOBAR') def test_trait_create_ordering( self, mock_path, mock_can, mock_get_by_name, mock_create): """Test that we call Trait.create when get_by_name has a TraitNotFound and that if create can't create, we assume 204. """ # The trait doesn't initially exist. mock_get_by_name.side_effect = exception.TraitNotFound( name='CUSTOM_FOOBAR') # But we fake that it does after first not finding it. 
mock_create.side_effect = exception.TraitExists( name='CUSTOM_FOOBAR') fake_context = context.RequestContext( user_id='fake', project_id='fake') req = webob.Request.blank('/traits/CUSTOM_FOOBAR') req.environ['placement.context'] = fake_context parse_version = microversion_parse.parse_version_string microversion = parse_version('1.15') microversion.max_version = parse_version('9.99') microversion.min_version = parse_version('1.0') req.environ['placement.microversion'] = microversion response = req.get_response(trait.put_trait) # Trait was assumed to exist. self.assertEqual('204 No Content', response.status) # We get a last modified header, even though we don't know the exact # create_at time (it is None on the Trait object and we fall back to # now) self.assertIn('last-modified', response.headers) # Confirm we checked to see if the trait exists, but the # side_effect happens mock_get_by_name.assert_called_once_with(fake_context, 'CUSTOM_FOOBAR') # Confirm we attempt to create the trait. mock_create.assert_called_once_with() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/handlers/test_util.py0000664000175000017500000003257300000000000026126 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Unit tests for the utility functions used by the placement DB.""" import fixtures import microversion_parse from oslo_config import cfg from oslo_config import fixture as config_fixture from oslo_utils.fixture import uuidsentinel import webob from placement import conf from placement import context from placement import exception from placement.handlers import util from placement import microversion from placement.objects import consumer as consumer_obj from placement.objects import project as project_obj from placement.objects import user as user_obj from placement.tests.unit import base class TestEnsureConsumer(base.ContextTestCase): def setUp(self): super(TestEnsureConsumer, self).setUp() self.conf = cfg.ConfigOpts() self.useFixture(config_fixture.Config(self.conf)) conf.register_opts(self.conf) self.mock_project_get = self.useFixture(fixtures.MockPatch( 'placement.objects.project.' 'Project.get_by_external_id')).mock self.mock_user_get = self.useFixture(fixtures.MockPatch( 'placement.objects.user.' 'User.get_by_external_id')).mock self.mock_consumer_get = self.useFixture(fixtures.MockPatch( 'placement.objects.consumer.' 'Consumer.get_by_uuid')).mock self.mock_project_create = self.useFixture(fixtures.MockPatch( 'placement.objects.project.' 'Project.create')).mock self.mock_user_create = self.useFixture(fixtures.MockPatch( 'placement.objects.user.' 'User.create')).mock self.mock_consumer_create = self.useFixture(fixtures.MockPatch( 'placement.objects.consumer.' 'Consumer.create')).mock self.mock_consumer_update = self.useFixture(fixtures.MockPatch( 'placement.objects.consumer.' 
'Consumer.update')).mock self.ctx = context.RequestContext(user_id='fake', project_id='fake') self.ctx.config = self.conf self.consumer_id = uuidsentinel.consumer self.project_id = uuidsentinel.project self.user_id = uuidsentinel.user mv_parsed = microversion_parse.Version(1, 27) mv_parsed.max_version = microversion_parse.parse_version_string( microversion.max_version_string()) mv_parsed.min_version = microversion_parse.parse_version_string( microversion.min_version_string()) self.before_version = mv_parsed mv_parsed = microversion_parse.Version(1, 28) mv_parsed.max_version = microversion_parse.parse_version_string( microversion.max_version_string()) mv_parsed.min_version = microversion_parse.parse_version_string( microversion.min_version_string()) self.after_version = mv_parsed mv_parsed = microversion_parse.Version(1, 38) mv_parsed.max_version = microversion_parse.parse_version_string( microversion.max_version_string()) mv_parsed.min_version = microversion_parse.parse_version_string( microversion.min_version_string()) self.cons_type_req_version = mv_parsed def test_no_existing_project_user_consumer_before_gen_success(self): """Tests that we don't require a consumer_generation=None before the appropriate microversion. """ self.mock_project_get.side_effect = exception.NotFound self.mock_user_get.side_effect = exception.NotFound self.mock_consumer_get.side_effect = exception.NotFound consumer_gen = 1 # should be ignored util.ensure_consumer( self.ctx, self.consumer_id, self.project_id, self.user_id, consumer_gen, 'TYPE', self.before_version) self.mock_project_get.assert_called_once_with( self.ctx, self.project_id) self.mock_user_get.assert_called_once_with( self.ctx, self.user_id) self.mock_consumer_get.assert_called_once_with( self.ctx, self.consumer_id) self.mock_project_create.assert_called_once() self.mock_user_create.assert_called_once() self.mock_consumer_create.assert_called_once() def test_no_existing_project_user_consumer_after_gen_success(self): """Tests that we require a consumer_generation=None after the appropriate microversion. """ self.mock_project_get.side_effect = exception.NotFound self.mock_user_get.side_effect = exception.NotFound self.mock_consumer_get.side_effect = exception.NotFound consumer_gen = None # should NOT be ignored (and None is expected) util.ensure_consumer( self.ctx, self.consumer_id, self.project_id, self.user_id, consumer_gen, 'TYPE', self.after_version) self.mock_project_get.assert_called_once_with( self.ctx, self.project_id) self.mock_user_get.assert_called_once_with( self.ctx, self.user_id) self.mock_consumer_get.assert_called_once_with( self.ctx, self.consumer_id) self.mock_project_create.assert_called_once() self.mock_user_create.assert_called_once() self.mock_consumer_create.assert_called_once() def test_no_existing_project_user_consumer_after_gen_fail(self): """Tests that we require a consumer_generation=None after the appropriate microversion and that None is the expected value. 
""" self.mock_project_get.side_effect = exception.NotFound self.mock_user_get.side_effect = exception.NotFound self.mock_consumer_get.side_effect = exception.NotFound consumer_gen = 1 # should NOT be ignored (and 1 is not expected) self.assertRaises( webob.exc.HTTPConflict, util.ensure_consumer, self.ctx, self.consumer_id, self.project_id, self.user_id, consumer_gen, 'TYPE', self.after_version) def test_no_existing_project_user_consumer_use_incomplete(self): """Verify that if the project_id arg is None, that we fall back to the CONF options for incomplete project and user ID. """ self.mock_project_get.side_effect = exception.NotFound self.mock_user_get.side_effect = exception.NotFound self.mock_consumer_get.side_effect = exception.NotFound consumer_gen = None # should NOT be ignored (and None is expected) util.ensure_consumer( self.ctx, self.consumer_id, None, None, consumer_gen, 'TYPE', self.before_version) self.mock_project_get.assert_called_once_with( self.ctx, self.conf.placement.incomplete_consumer_project_id) self.mock_user_get.assert_called_once_with( self.ctx, self.conf.placement.incomplete_consumer_user_id) self.mock_consumer_get.assert_called_once_with( self.ctx, self.consumer_id) self.mock_project_create.assert_called_once() self.mock_user_create.assert_called_once() self.mock_consumer_create.assert_called_once() def test_existing_project_no_existing_consumer_before_gen_success(self): """Check that if we find an existing project and user, that we use those found objects in creating the consumer. Do not require a consumer generation before the appropriate microversion. """ proj = project_obj.Project(self.ctx, id=1, external_id=self.project_id) self.mock_project_get.return_value = proj user = user_obj.User(self.ctx, id=1, external_id=self.user_id) self.mock_user_get.return_value = user self.mock_consumer_get.side_effect = exception.NotFound consumer_gen = None # should be ignored util.ensure_consumer( self.ctx, self.consumer_id, self.project_id, self.user_id, consumer_gen, 'TYPE', self.before_version) self.mock_project_create.assert_not_called() self.mock_user_create.assert_not_called() self.mock_consumer_create.assert_called_once() def test_existing_consumer_after_gen_matches_supplied_gen(self): """Tests that we require a consumer_generation after the appropriate microversion and that when the consumer already exists, then we ensure a matching generation is supplied """ proj = project_obj.Project(self.ctx, id=1, external_id=self.project_id) self.mock_project_get.return_value = proj user = user_obj.User(self.ctx, id=1, external_id=self.user_id) self.mock_user_get.return_value = user consumer = consumer_obj.Consumer( self.ctx, id=1, project=proj, user=user, generation=2) self.mock_consumer_get.return_value = consumer consumer_gen = 2 # should NOT be ignored (and 2 is expected) util.ensure_consumer( self.ctx, self.consumer_id, self.project_id, self.user_id, consumer_gen, 'TYPE', self.after_version) self.mock_project_create.assert_not_called() self.mock_user_create.assert_not_called() self.mock_consumer_create.assert_not_called() def test_existing_consumer_after_gen_fail(self): """Tests that we require a consumer_generation after the appropriate microversion and that when the consumer already exists, then we raise a 400 when there is a mismatch on the existing generation. 
""" proj = project_obj.Project(self.ctx, id=1, external_id=self.project_id) self.mock_project_get.return_value = proj user = user_obj.User(self.ctx, id=1, external_id=self.user_id) self.mock_user_get.return_value = user consumer = consumer_obj.Consumer( self.ctx, id=1, project=proj, user=user, generation=42) self.mock_consumer_get.return_value = consumer consumer_gen = 2 # should NOT be ignored (and 2 is NOT expected) self.assertRaises( webob.exc.HTTPConflict, util.ensure_consumer, self.ctx, self.consumer_id, self.project_id, self.user_id, consumer_gen, 'TYPE', self.after_version) def test_existing_consumer_different_consumer_type_supplied(self): """Tests that we update a consumer's type ID if the one supplied by the user is different than the one in the existing record. """ proj = project_obj.Project(self.ctx, id=1, external_id=self.project_id) self.mock_project_get.return_value = proj user = user_obj.User(self.ctx, id=1, external_id=self.user_id) self.mock_user_get.return_value = user # Consumer currently has type ID = 1 consumer = consumer_obj.Consumer( self.ctx, id=1, project=proj, user=user, generation=1, consumer_type_id=1) self.mock_consumer_get.return_value = consumer consumer_gen = 1 consumer, created_new_consumer, request_attr = util.ensure_consumer( self.ctx, self.consumer_id, self.project_id, self.user_id, consumer_gen, 'TYPE', self.cons_type_req_version) util.update_consumers([consumer], {consumer.uuid: request_attr}) # Expect 1 call to update() to update to the supplied consumer type ID self.mock_consumer_update.assert_called_once_with() # Consumer should have the new consumer type from the cache self.assertEqual( self.ctx.ct_cache.id_from_string.return_value, consumer.consumer_type_id) def test_consumer_create_exists_different_consumer_type_supplied(self): """Tests that we update a consumer's type ID if the one supplied by a racing request is different than the one in the existing (recently created) record. 
""" proj = project_obj.Project(self.ctx, id=1, external_id=self.project_id) self.mock_project_get.return_value = proj user = user_obj.User(self.ctx, id=1, external_id=self.user_id) self.mock_user_get.return_value = user # Request A recently created consumer has type ID = 1 consumer = consumer_obj.Consumer( self.ctx, id=1, project=proj, user=user, generation=1, consumer_type_id=1, uuid=uuidsentinel.consumer) self.mock_consumer_get.return_value = consumer # Request B will encounter ConsumerExists as Request A just created it self.mock_consumer_create.side_effect = ( exception.ConsumerExists(uuid=uuidsentinel.consumer)) consumer_gen = 1 consumer, created_new_consumer, request_attr = util.ensure_consumer( self.ctx, self.consumer_id, self.project_id, self.user_id, consumer_gen, 'TYPE', self.cons_type_req_version) util.update_consumers([consumer], {consumer.uuid: request_attr}) # Expect 1 call to update() to update to the supplied consumer type ID self.mock_consumer_update.assert_called_once_with() # Consumer should have the new consumer type from the cache self.assertEqual( self.ctx.ct_cache.id_from_string.return_value, consumer.consumer_type_id) ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.280778 openstack_placement-13.0.0/placement/tests/unit/objects/0000775000175000017500000000000000000000000023357 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/objects/__init__.py0000664000175000017500000000000000000000000025456 0ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/objects/base.py0000664000175000017500000000246100000000000024646 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from oslo_config import fixture as config_fixture from placement import conf from placement import context from placement.tests.unit import base as unit_base class TestCase(unit_base.ContextTestCase): """Base class for other tests in this file. It establishes the RequestContext used as self.context in the tests. """ def setUp(self): super(TestCase, self).setUp() self.user_id = 'fake-user' self.project_id = 'fake-project' self.context = context.RequestContext(self.user_id, self.project_id) config = cfg.ConfigOpts() self.conf_fixture = self.useFixture(config_fixture.Config(config)) conf.register_opts(config) self.context.config = config ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/objects/test_allocation.py0000664000175000017500000000777100000000000027131 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from placement.objects import allocation as alloc_obj from placement.objects import resource_provider as rp_obj from placement.tests.unit.objects import base _RESOURCE_PROVIDER_ID = 1 _RESOURCE_PROVIDER_UUID = uuids.resource_provider _RESOURCE_PROVIDER_NAME = str(uuids.resource_name) _RESOURCE_CLASS_ID = 2 _ALLOCATION_ID = 2 _ALLOCATION_DB = { 'id': _ALLOCATION_ID, 'resource_provider_id': _RESOURCE_PROVIDER_ID, 'resource_class_id': _RESOURCE_CLASS_ID, 'consumer_uuid': uuids.fake_instance, 'consumer_id': 1, 'consumer_generation': 0, 'used': 8, 'user_id': 1, 'user_external_id': uuids.user_id, 'project_id': 1, 'project_external_id': uuids.project_id, 'updated_at': timeutils.utcnow(with_timezone=True), 'created_at': timeutils.utcnow(with_timezone=True), } _ALLOCATION_BY_CONSUMER_DB = { 'id': _ALLOCATION_ID, 'resource_provider_id': _RESOURCE_PROVIDER_ID, 'resource_class_id': _RESOURCE_CLASS_ID, 'consumer_uuid': uuids.fake_instance, 'consumer_id': 1, 'consumer_type_id': 1, 'consumer_generation': 0, 'used': 8, 'user_id': 1, 'user_external_id': uuids.user_id, 'project_id': 1, 'project_external_id': uuids.project_id, 'updated_at': timeutils.utcnow(with_timezone=True), 'created_at': timeutils.utcnow(with_timezone=True), 'resource_provider_name': _RESOURCE_PROVIDER_NAME, 'resource_provider_uuid': _RESOURCE_PROVIDER_UUID, 'resource_provider_generation': 0, } class TestAllocationListNoDB(base.TestCase): def setUp(self): super(TestAllocationListNoDB, self).setUp() @mock.patch('placement.objects.allocation.' '_get_allocations_by_provider_id', return_value=[_ALLOCATION_DB]) def test_get_all_by_resource_provider(self, mock_get_allocations_from_db): rp = rp_obj.ResourceProvider(self.context, id=_RESOURCE_PROVIDER_ID, uuid=uuids.resource_provider) allocations = alloc_obj.get_all_by_resource_provider(self.context, rp) self.assertEqual(1, len(allocations)) mock_get_allocations_from_db.assert_called_once_with( self.context, rp.id) self.assertEqual(_ALLOCATION_DB['used'], allocations[0].used) self.assertEqual(_ALLOCATION_DB['created_at'], allocations[0].created_at) self.assertEqual(_ALLOCATION_DB['updated_at'], allocations[0].updated_at) @mock.patch('placement.objects.allocation.' 
'_get_allocations_by_consumer_uuid', return_value=[_ALLOCATION_BY_CONSUMER_DB]) def test_get_all_by_consumer_id(self, mock_get_allocations_from_db): allocations = alloc_obj.get_all_by_consumer_id( self.context, uuids.consumer) self.assertEqual(1, len(allocations)) mock_get_allocations_from_db.assert_called_once_with(self.context, uuids.consumer) self.assertEqual(_ALLOCATION_BY_CONSUMER_DB['used'], allocations[0].used) self.assertEqual(_ALLOCATION_BY_CONSUMER_DB['created_at'], allocations[0].created_at) self.assertEqual(_ALLOCATION_BY_CONSUMER_DB['updated_at'], allocations[0].updated_at) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/objects/test_allocation_candidate.py0000664000175000017500000001464100000000000031117 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock from placement import lib as placement_lib from placement.objects import allocation_candidate as ac_obj from placement.objects import research_context as res_ctx from placement.tests.unit.objects import base class TestAllocationCandidatesNoDB(base.TestCase): @mock.patch('placement.objects.research_context._has_provider_trees', new=mock.Mock(return_value=True)) def test_limit_results(self): # Results are limited based on their root provider uuid, not uuid. # For a more "real" test of this functionality, one that exercises # nested providers, see the 'get allocation candidates nested limit' # test in the 'allocation-candidates.yaml' gabbit. 
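# The mocks below only need to expose root_provider_uuid, since # limit_results() trims and de-duplicates candidates based on the root # provider of each allocation request; plain integers stand in for real # UUIDs here.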
aro_in = [ mock.Mock( resource_requests=[ mock.Mock(resource_provider=mock.Mock( root_provider_uuid=uuid)) for uuid in (1, 0, 4, 8)]), mock.Mock( resource_requests=[ mock.Mock(resource_provider=mock.Mock( root_provider_uuid=uuid)) for uuid in (4, 8, 5)]), mock.Mock( resource_requests=[ mock.Mock(resource_provider=mock.Mock( root_provider_uuid=uuid)) for uuid in (1, 7, 6, 4, 8, 5)]), ] sum1 = mock.Mock(resource_provider=mock.Mock(root_provider_uuid=1)) sum0 = mock.Mock(resource_provider=mock.Mock(root_provider_uuid=0)) sum4 = mock.Mock(resource_provider=mock.Mock(root_provider_uuid=4)) sum8 = mock.Mock(resource_provider=mock.Mock(root_provider_uuid=8)) sum5 = mock.Mock(resource_provider=mock.Mock(root_provider_uuid=5)) sum7 = mock.Mock(resource_provider=mock.Mock(root_provider_uuid=7)) sum6 = mock.Mock(resource_provider=mock.Mock(root_provider_uuid=6)) sum_in = [sum1, sum0, sum4, sum8, sum5, sum7, sum6] rw_ctx = res_ctx.RequestWideSearchContext( self.context, placement_lib.RequestWideParams(limit=2), True) aro, sum = rw_ctx.limit_results(aro_in, sum_in) self.assertEqual(aro_in[:2], aro) self.assertEqual(set([sum1, sum0, sum4, sum8, sum5]), set(sum)) def test_check_same_subtree(self): # Construct a tree that look like this # # 0 -+- 00 --- 000 1 -+- 10 --- 100 # | | # +- 01 -+- 010 +- 11 -+- 110 # | +- 011 | +- 111 # +- 02 -+- 020 +- 12 -+- 120 # +- 021 +- 121 # parent_by_rp = {"0": None, "00": "0", "000": "00", "01": "0", "010": "01", "011": "01", "02": "0", "020": "02", "021": "02", "1": None, "10": "1", "100": "10", "11": "1", "110": "11", "111": "11", "12": "1", "120": "12", "121": "12"} same_subtree = [ set(["0", "00", "01"]), set(["01", "010"]), set(["02", "020", "021"]), set(["02", "020", "021"]), set(["0", "02", "010"]), set(["000"]) ] different_subtree = [ set(["10", "11"]), set(["110", "111"]), set(["10", "11", "110"]), set(["12", "120", "100"]), set(["0", "1"]), ] for group in same_subtree: self.assertTrue( ac_obj._check_same_subtree(group, parent_by_rp)) for group in different_subtree: self.assertFalse( ac_obj._check_same_subtree(group, parent_by_rp)) @mock.patch('placement.objects.research_context._has_provider_trees', new=mock.Mock(return_value=True)) def _test_generate_areq_list(self, strategy, expected_candidates): self.conf_fixture.conf.set_override( "allocation_candidates_generation_strategy", strategy, group="placement") rw_ctx = res_ctx.RequestWideSearchContext( self.context, placement_lib.RequestWideParams(), True) areq_lists_by_anchor = { "root1": { "": ["r1A", "r1B",], "group1": ["r1g1A", "r1g1B",], }, "root2": { "": ["r2A"], "group1": ["r2g1A", "r2g1B"], }, "root3": { "": ["r3A"], }, } generator = ac_obj._generate_areq_lists( rw_ctx, areq_lists_by_anchor, {"", "group1"}) self.assertEqual(expected_candidates, list(generator)) def test_generate_areq_lists_depth_first(self): # Depth-first will generate all root1 candidates first then root2, # root3 is ignored as it has no candidate for group1. expected_candidates = [ ('r1A', 'r1g1A'), ('r1A', 'r1g1B'), ('r1B', 'r1g1A'), ('r1B', 'r1g1B'), ('r2A', 'r2g1A'), ('r2A', 'r2g1B'), ] self._test_generate_areq_list("depth-first", expected_candidates) @mock.patch('placement.objects.research_context._has_provider_trees', new=mock.Mock(return_value=True)) def test_generate_areq_lists_breadth_first(self): # Breadth-first will take one candidate from root1 then root2 then goes # back to root1 etc. Root2 runs out of candidates earlier than root1 so # the last two candidates are both from root1. 
The root3 is still # ignored as it has no candidates for group1. expected_candidates = [ ('r1A', 'r1g1A'), ('r2A', 'r2g1A'), ('r1A', 'r1g1B'), ('r2A', 'r2g1B'), ('r1B', 'r1g1A'), ('r1B', 'r1g1B') ] self._test_generate_areq_list("breadth-first", expected_candidates) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/objects/test_inventory.py0000664000175000017500000001130000000000000027020 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock import os_resource_classes as orc from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from placement.objects import inventory from placement.objects import resource_provider from placement.tests.unit.objects import base _RESOURCE_CLASS_NAME = 'DISK_GB' _RESOURCE_CLASS_ID = 2 _RESOURCE_PROVIDER_ID = 1 _RESOURCE_PROVIDER_UUID = uuids.resource_provider VCPU_ID = orc.STANDARDS.index( orc.VCPU) _INVENTORY_ID = 2 _INVENTORY_DB = { 'id': _INVENTORY_ID, 'resource_provider_id': _RESOURCE_PROVIDER_ID, 'resource_class_id': _RESOURCE_CLASS_ID, 'total': 16, 'reserved': 2, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 1.0, 'updated_at': None, 'created_at': timeutils.utcnow(with_timezone=True), } class TestInventoryNoDB(base.TestCase): @mock.patch('placement.objects.inventory._get_inventory_by_provider_id') def test_get_all_by_resource_provider(self, mock_get): expected = [dict(_INVENTORY_DB, resource_provider_id=_RESOURCE_PROVIDER_ID), dict(_INVENTORY_DB, id=_INVENTORY_DB['id'] + 1, resource_provider_id=_RESOURCE_PROVIDER_ID)] mock_get.return_value = expected rp = resource_provider.ResourceProvider(self.context, id=_RESOURCE_PROVIDER_ID, uuid=_RESOURCE_PROVIDER_UUID) objs = inventory.get_all_by_resource_provider(self.context, rp) self.assertEqual(2, len(objs)) self.assertEqual(_INVENTORY_DB['id'], objs[0].id) self.assertEqual(_INVENTORY_DB['id'] + 1, objs[1].id) self.assertEqual(_RESOURCE_PROVIDER_ID, objs[0].resource_provider.id) def test_set_defaults(self): rp = resource_provider.ResourceProvider(self.context, id=_RESOURCE_PROVIDER_ID, uuid=_RESOURCE_PROVIDER_UUID) kwargs = dict(resource_provider=rp, resource_class=_RESOURCE_CLASS_NAME, total=16) inv = inventory.Inventory(self.context, **kwargs) self.assertEqual(0, inv.reserved) self.assertEqual(1, inv.min_unit) self.assertEqual(1, inv.max_unit) self.assertEqual(1, inv.step_size) self.assertEqual(1.0, inv.allocation_ratio) def test_capacity(self): rp = resource_provider.ResourceProvider(self.context, id=_RESOURCE_PROVIDER_ID, uuid=_RESOURCE_PROVIDER_UUID) kwargs = dict(resource_provider=rp, resource_class=_RESOURCE_CLASS_NAME, total=16, reserved=16) inv = inventory.Inventory(self.context, **kwargs) self.assertEqual(0, inv.capacity) inv.reserved = 15 self.assertEqual(1, inv.capacity) inv.allocation_ratio = 2.0 self.assertEqual(2, inv.capacity) class TestListOfInventory(base.TestCase): def test_find(self): rp = 
resource_provider.ResourceProvider( self.context, uuid=uuids.rp_uuid) inv_list = [ inventory.Inventory( resource_provider=rp, resource_class=orc.VCPU, total=24), inventory.Inventory( resource_provider=rp, resource_class=orc.MEMORY_MB, total=10240), ] found = inventory.find(inv_list, orc.MEMORY_MB) self.assertIsNotNone(found) self.assertEqual(10240, found.total) found = inventory.find(inv_list, orc.VCPU) self.assertIsNotNone(found) self.assertEqual(24, found.total) found = inventory.find(inv_list, orc.DISK_GB) self.assertIsNone(found) # Try an integer resource class identifier... self.assertRaises(ValueError, inventory.find, inv_list, VCPU_ID) # Use an invalid string... self.assertIsNone(inventory.find(inv_list, 'HOUSE')) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/objects/test_resource_class.py0000664000175000017500000000235500000000000030011 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from placement import exception from placement.objects import resource_class from placement.tests.unit.objects import base class TestResourceClass(base.TestCase): def test_cannot_create_with_id(self): rc = resource_class.ResourceClass(self.context, id=1, name='CUSTOM_IRON_NFV') exc = self.assertRaises(exception.ObjectActionError, rc.create) self.assertIn('already created', str(exc)) def test_cannot_create_requires_name(self): rc = resource_class.ResourceClass(self.context) exc = self.assertRaises(exception.ObjectActionError, rc.create) self.assertIn('name is required', str(exc)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/objects/test_resource_provider.py0000664000175000017500000000653100000000000030536 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_utils.fixture import uuidsentinel as uuids from oslo_utils import timeutils from placement import exception from placement.objects import resource_provider from placement.tests.unit.objects import base _RESOURCE_CLASS_ID = 2 _RESOURCE_PROVIDER_ID = 1 _RESOURCE_PROVIDER_UUID = uuids.resource_provider _RESOURCE_PROVIDER_NAME = str(uuids.resource_name) _RESOURCE_PROVIDER_DB = { 'id': _RESOURCE_PROVIDER_ID, 'uuid': _RESOURCE_PROVIDER_UUID, 'name': _RESOURCE_PROVIDER_NAME, 'generation': 0, 'root_provider_uuid': _RESOURCE_PROVIDER_UUID, 'parent_provider_uuid': None, 'updated_at': None, 'created_at': timeutils.utcnow(with_timezone=True), } _RESOURCE_PROVIDER_ID2 = 2 _RESOURCE_PROVIDER_UUID2 = uuids.resource_provider2 _RESOURCE_PROVIDER_NAME2 = uuids.resource_name2 _RESOURCE_PROVIDER_DB2 = { 'id': _RESOURCE_PROVIDER_ID2, 'uuid': _RESOURCE_PROVIDER_UUID2, 'name': _RESOURCE_PROVIDER_NAME2, 'generation': 0, 'root_provider_uuid': _RESOURCE_PROVIDER_UUID, 'parent_provider_uuid': _RESOURCE_PROVIDER_UUID, } _ALLOCATION_ID = 2 _ALLOCATION_DB = { 'id': _ALLOCATION_ID, 'resource_provider_id': _RESOURCE_PROVIDER_ID, 'resource_class_id': _RESOURCE_CLASS_ID, 'consumer_uuid': uuids.fake_instance, 'consumer_id': 1, 'consumer_generation': 0, 'used': 8, 'user_id': 1, 'user_external_id': uuids.user_id, 'project_id': 1, 'project_external_id': uuids.project_id, 'updated_at': timeutils.utcnow(with_timezone=True), 'created_at': timeutils.utcnow(with_timezone=True), } _ALLOCATION_BY_CONSUMER_DB = { 'id': _ALLOCATION_ID, 'resource_provider_id': _RESOURCE_PROVIDER_ID, 'resource_class_id': _RESOURCE_CLASS_ID, 'consumer_uuid': uuids.fake_instance, 'consumer_id': 1, 'consumer_generation': 0, 'used': 8, 'user_id': 1, 'user_external_id': uuids.user_id, 'project_id': 1, 'project_external_id': uuids.project_id, 'updated_at': timeutils.utcnow(with_timezone=True), 'created_at': timeutils.utcnow(with_timezone=True), 'resource_provider_name': _RESOURCE_PROVIDER_NAME, 'resource_provider_uuid': _RESOURCE_PROVIDER_UUID, 'resource_provider_generation': 0, } class TestResourceProviderNoDB(base.TestCase): def test_create_id_fail(self): obj = resource_provider.ResourceProvider(context=self.context, uuid=_RESOURCE_PROVIDER_UUID, id=_RESOURCE_PROVIDER_ID) self.assertRaises(exception.ObjectActionError, obj.create) def test_create_no_uuid_fail(self): obj = resource_provider.ResourceProvider(context=self.context) self.assertRaises(exception.ObjectActionError, obj.create) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/objects/test_rp_candidates.py0000664000175000017500000000731300000000000027574 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import testtools from placement.objects import rp_candidates class TestRPCandidateList(testtools.TestCase): def setUp(self): super(TestRPCandidateList, self).setUp() self.rp_candidates = rp_candidates.RPCandidateList() self.rps_rc1 = set([ ('rp1', 'root1'), ('rp2', 'root1'), ('ss1', 'root1'), ('rp3', 'root'), ('ss1', 'root')]) self.rp_candidates.add_rps(self.rps_rc1, 'rc_1') def test_property(self): expected_rpsinfo = set([('rp1', 'root1', 'rc_1'), ('rp2', 'root1', 'rc_1'), ('ss1', 'root1', 'rc_1'), ('rp3', 'root', 'rc_1'), ('ss1', 'root', 'rc_1')]) self.assertEqual(expected_rpsinfo, self.rp_candidates.rps_info) expected_rps = set(['rp1', 'rp2', 'rp3', 'ss1']) expected_trees = set(['root1', 'root']) expected_allrps = expected_rps | expected_trees self.assertEqual(expected_rps, self.rp_candidates.rps) self.assertEqual(expected_trees, self.rp_candidates.trees) self.assertEqual(expected_allrps, self.rp_candidates.all_rps) def test_filter_by_tree(self): self.rp_candidates.filter_by_tree(set(['root1'])) expected_rpsinfo = set([('rp1', 'root1', 'rc_1'), ('rp2', 'root1', 'rc_1'), ('ss1', 'root1', 'rc_1')]) self.assertEqual(expected_rpsinfo, self.rp_candidates.rps_info) def test_filter_by_rp(self): self.rp_candidates.filter_by_rp(set([('ss1', 'root1')])) expected_rpsinfo = set([('ss1', 'root1', 'rc_1')]) self.assertEqual(expected_rpsinfo, self.rp_candidates.rps_info) def test_filter_by_rp_or_tree(self): self.rp_candidates.filter_by_rp_or_tree(set(['ss1', 'root1'])) # we get 'ss1' and rps under 'root1' expected_rpsinfo = set([('ss1', 'root', 'rc_1'), ('ss1', 'root1', 'rc_1'), ('rp1', 'root1', 'rc_1'), ('rp2', 'root1', 'rc_1')]) self.assertEqual(expected_rpsinfo, self.rp_candidates.rps_info) def test_merge_common_trees(self): merge_candidates = rp_candidates.RPCandidateList() rps_rc2 = set([('rp1', 'root2'), ('rp4', 'root2'), ('ss1', 'root2'), ('rp5', 'root'), ('ss1', 'root')]) merge_candidates.add_rps(rps_rc2, 'rc_2') self.rp_candidates.merge_common_trees(merge_candidates) # we get only rps under 'root' since it's only the common tree expected_rpsinfo = set([('rp3', 'root', 'rc_1'), ('rp5', 'root', 'rc_2'), ('ss1', 'root', 'rc_1'), ('ss1', 'root', 'rc_2')]) self.assertEqual(expected_rpsinfo, self.rp_candidates.rps_info) # make sure merging empty candidates doesn't change anything empty_candidates = rp_candidates.RPCandidateList() self.rp_candidates.merge_common_trees(empty_candidates) self.assertEqual(expected_rpsinfo, self.rp_candidates.rps_info) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/objects/test_trait.py0000664000175000017500000000177500000000000026125 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
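# NOTE(editor): illustrative sketch only; not part of the openstack_placement
# source. TestRPCandidateList.test_merge_common_trees above fixes the
# semantics of RPCandidateList.merge_common_trees: only candidates whose root
# ("tree") occurs in *both* candidate lists survive, and merging an empty
# list is a no-op. The helper below restates that logic over plain
# (rp, root, resource_class) tuples; its name is hypothetical.
def _sketch_merge_common_trees(rps_info_a, rps_info_b):
    """Keep candidates from either side whose tree is common to both."""
    if not rps_info_b:
        # merging empty candidates changes nothing, per the test above
        return set(rps_info_a)
    common_trees = ({root for _, root, _ in rps_info_a} &
                    {root for _, root, _ in rps_info_b})
    return {info for info in set(rps_info_a) | set(rps_info_b)
            if info[1] in common_trees}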
from unittest import mock from placement.objects import trait from placement.tests.unit.objects import base class TestTraits(base.TestCase): @mock.patch('placement.objects.trait._trait_sync') def test_sync_flag(self, mock_sync): synced = trait._TRAITS_SYNCED self.assertFalse(synced) # Sync the traits trait.ensure_sync(self.context) synced = trait._TRAITS_SYNCED self.assertTrue(synced) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/objects/test_usage.py0000664000175000017500000000170600000000000026100 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import decimal import os_resource_classes as orc import testtools from placement.objects import usage class TestUsageNoDB(testtools.TestCase): def test_decimal_to_int(self): dmal = decimal.Decimal('10') usage_obj = usage.Usage(resource_class=orc.VCPU, usage=dmal) # Type must come second in assertIsInstance. self.assertIsInstance(usage_obj.usage, int) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/policy_fixture.py0000664000175000017500000000427200000000000025352 0ustar00zuulzuul00000000000000# Copyright 2012 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import fixtures from oslo_policy import policy as oslo_policy from placement.conf import paths from placement import policies from placement import policy as placement_policy class PolicyFixture(fixtures.Fixture): def __init__(self, conf_fixture): self.conf_fixture = conf_fixture super(PolicyFixture, self).__init__() """Load the default placement policy for tests.""" def setUp(self): super(PolicyFixture, self).setUp() policy_file = paths.state_path_def('etc/placement/policy.yaml') self.conf_fixture.config(group='oslo_policy', policy_file=policy_file) placement_policy.reset() # because oslo.policy has a nasty habit of modifying the default rules # we provide, we must pass a copy of the rules rather then the rules # themselves placement_policy.init( self.conf_fixture.conf, suppress_deprecation_warnings=True, rules=copy.deepcopy(policies.list_rules())) self.addCleanup(placement_policy.reset) @staticmethod def set_rules(rules, overwrite=True): """Set placement policy rules. .. note:: The rules must first be registered via the Enforcer.register_defaults method. 
:param rules: dict of action=rule mappings to set :param overwrite: Whether to overwrite current rules or update them with the new rules. """ enforcer = placement_policy.get_enforcer() enforcer.set_rules(oslo_policy.Rules.from_dict(rules), overwrite=overwrite) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/test_auth.py0000664000175000017500000000567000000000000024310 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Unit tests for the auth middleware used by the Placement service. Most of the functionality of the auth middleware is tested in functional and integration tests but sometimes it is more convenient or accurate to use unit tests. """ from keystonemiddleware import auth_token from oslo_config import cfg from oslo_config import fixture as config_fixture from oslo_policy import opts as policy_opts import testtools import webob from placement import conf from placement import deploy class RootNoAuth(testtools.TestCase): """Confirm that no auth is required for accessing root.""" def setUp(self): """Establish config defaults for middlewares.""" super(RootNoAuth, self).setUp() config = cfg.ConfigOpts() conf_fixture = self.useFixture(config_fixture.Config(config)) conf.register_opts(conf_fixture.conf) auth_token_opts = auth_token.AUTH_TOKEN_OPTS[0][1] conf_fixture.register_opts(auth_token_opts, group='keystone_authtoken') www_authenticate_uri = 'http://example.com/identity' conf_fixture.config( www_authenticate_uri=www_authenticate_uri, group='keystone_authtoken') # ensure that the auth_token middleware is chosen conf_fixture.config(auth_strategy='keystone', group='api') # register and default policy opts (referenced by deploy) policy_opts.set_defaults(conf_fixture.conf) self.conf = conf_fixture.conf self.app = deploy.deploy(self.conf) def _test_root_req(self, req): # set no environ on req, thus no auth req.environ['REMOTE_ADDR'] = '127.0.0.1' response = req.get_response(self.app) data = response.json_body self.assertEqual('CURRENT', data['versions'][0]['status']) def test_slash_no_auth(self): """Accessing / requires no auth.""" req = webob.Request.blank('/', method='GET') self._test_root_req(req) def test_no_slash_no_auth(self): """Accessing '' requires no auth.""" req = webob.Request.blank('', method='GET') self._test_root_req(req) def test_auth_elsewhere(self): """Make sure auth is happening.""" req = webob.Request.blank('/resource_providers', method='GET') req.environ['REMOTE_ADDR'] = '127.0.0.1' response = req.get_response(self.app) self.assertEqual('401 Unauthorized', response.status) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/test_context.py0000664000175000017500000000570600000000000025033 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from unittest import mock from placement import context from placement import exception from placement.tests.unit import base class TestPlacementRequestContext(base.ContextTestCase): """Test cases for PlacementRequestContext.""" def setUp(self): super(TestPlacementRequestContext, self).setUp() self.ctxt = context.RequestContext(user_id='fake', project_id='fake') self.default_target = {'user_id': self.ctxt.user_id, 'project_id': self.ctxt.project_id} @mock.patch('placement.policy.authorize', return_value=True) def test_can_target_none_fatal_true_accept(self, mock_authorize): self.assertTrue(self.ctxt.can('placement:resource_providers:list')) mock_authorize.assert_called_once_with( self.ctxt, 'placement:resource_providers:list', self.default_target) @mock.patch('placement.policy.authorize', side_effect=exception.PolicyNotAuthorized( action='placement:resource_providers:list')) def test_can_target_none_fatal_true_reject(self, mock_authorize): self.assertRaises(exception.PolicyNotAuthorized, self.ctxt.can, 'placement:resource_providers:list') mock_authorize.assert_called_once_with( self.ctxt, 'placement:resource_providers:list', self.default_target) @mock.patch('placement.policy.authorize', side_effect=exception.PolicyNotAuthorized( action='placement:resource_providers:list')) def test_can_target_none_fatal_false_reject(self, mock_authorize): self.assertFalse(self.ctxt.can('placement:resource_providers:list', fatal=False)) mock_authorize.assert_called_once_with( self.ctxt, 'placement:resource_providers:list', self.default_target) @mock.patch('placement.policy.authorize', return_value=True) def test_can_target_none_fatal_true_accept_custom_target( self, mock_authorize): class MyObj(object): user_id = project_id = 'fake2' target = MyObj() self.assertTrue(self.ctxt.can('placement:resource_providers:list', target=target)) mock_authorize.assert_called_once_with( self.ctxt, 'placement:resource_providers:list', target) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/test_db_api.py0000664000175000017500000000375000000000000024562 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
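# NOTE(editor): illustrative sketch only; not part of the openstack_placement
# source. TestPlacementRequestContext above pins the contract of
# RequestContext.can(): authorize the action against a default
# {user_id, project_id} target, re-raise PolicyNotAuthorized when fatal is
# True, and return False instead when fatal=False. Restated below; the
# function name is hypothetical.
from placement import exception
from placement import policy


def _sketch_can(ctxt, action, target=None, fatal=True):
    """Minimal restatement of the can() behaviour asserted above."""
    if target is None:
        target = {'user_id': ctxt.user_id, 'project_id': ctxt.project_id}
    try:
        return policy.authorize(ctxt, action, target)
    except exception.PolicyNotAuthorized:
        if fatal:
            raise
        return False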
from unittest import mock from oslo_config import cfg from oslo_config import fixture as config_fixture import testtools from placement import conf from placement import db_api class DbApiTests(testtools.TestCase): def setUp(self): super(DbApiTests, self).setUp() config = cfg.ConfigOpts() self.conf_fixture = self.useFixture(config_fixture.Config(config)) conf.register_opts(self.conf_fixture.conf) db_api.configure.reset() @mock.patch.object(db_api.placement_context_manager, "configure") def test_can_call_configure_twice(self, configure_mock): """This test asserts that configure can be safely called twice which may happen if placement is run under mod_wsgi and the wsgi application is reloaded. """ db_api.configure(self.conf_fixture.conf) configure_mock.assert_called_once() # a second invocation of configure on a transaction context # should raise an exception so mock this and assert its not # called on a second invocation of db_api's configure function configure_mock.side_effect = TypeError() db_api.configure(self.conf_fixture.conf) # Note we have not reset the mock so it should # have been called once from the first invocation of # db_api.configure and the second invocation should not # have called it again configure_mock.assert_called_once() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/test_db_conf.py0000664000175000017500000000252000000000000024730 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import testtools from oslo_config import cfg from oslo_config import fixture as config_fixture from placement import conf class TestPlacementDBConf(testtools.TestCase): """Test cases for Placement DB Setup.""" def setUp(self): super(TestPlacementDBConf, self).setUp() config = cfg.ConfigOpts() self.conf_fixture = self.useFixture(config_fixture.Config(config)) conf.register_opts(config) def test_missing_config_raises(self): """Not setting [placement_database]/connection is an error.""" exc = self.assertRaises( cfg.RequiredOptError, self.conf_fixture.conf, [], default_config_files=[]) self.assertIn( 'option connection in group [placement_database]', str(exc)) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/test_deploy.py0000664000175000017500000000450300000000000024635 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
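# NOTE(editor): illustrative sketch only; not part of the openstack_placement
# source. DbApiTests.test_can_call_configure_twice above asserts that
# db_api.configure() is idempotent: a second call (for example after a
# mod_wsgi reload) must not reconfigure the transaction context manager. A
# generic "run once" guard with that effect looks roughly like this; the
# decorator name and its reset hook are hypothetical.
import functools


def _sketch_run_once(func):
    """Run func on the first call only; later calls reuse the first result."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if not wrapper.called:
            wrapper.result = func(*args, **kwargs)
            wrapper.called = True
        return wrapper.result
    wrapper.called = False
    wrapper.result = None
    # mirrors the db_api.configure.reset() call used in setUp() above
    wrapper.reset = lambda: setattr(wrapper, 'called', False)
    return wrapper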
"""Unit tests for the deploy function used to build the Placement service.""" from keystonemiddleware import auth_token from oslo_config import cfg from oslo_config import fixture as config_fixture from oslo_policy import opts as policy_opts import testtools import webob from placement import conf from placement import deploy class DeployTest(testtools.TestCase): def test_auth_middleware_factory(self): """Make sure that configuration settings make their way to the keystone middleware correctly. """ config = cfg.ConfigOpts() conf_fixture = self.useFixture(config_fixture.Config(config)) conf.register_opts(conf_fixture.conf) # NOTE(cdent): There appears to be no simple way to get the list of # options used by the auth_token middleware. So we pull from an # existing data structure. auth_token_opts = auth_token.AUTH_TOKEN_OPTS[0][1] conf_fixture.register_opts(auth_token_opts, group='keystone_authtoken') www_authenticate_uri = 'http://example.com/identity' conf_fixture.config( www_authenticate_uri=www_authenticate_uri, group='keystone_authtoken') # ensure that the auth_token middleware is chosen conf_fixture.config(auth_strategy='keystone', group='api') # register and default policy opts (referenced by deploy) policy_opts.set_defaults(conf_fixture.conf) app = deploy.deploy(conf_fixture.conf) req = webob.Request.blank('/resource_providers', method="GET") response = req.get_response(app) auth_header = response.headers['www-authenticate'] self.assertIn(www_authenticate_uri, auth_header) self.assertIn('keystone uri=', auth_header.lower()) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/test_fault_wrap.py0000664000175000017500000000430400000000000025504 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
"""Tests for the placement fault wrap middleware.""" from unittest import mock from oslo_serialization import jsonutils import testtools import webob from placement import fault_wrap ERROR_MESSAGE = 'that was not supposed to happen' class Fault(Exception): pass class TestFaultWrapper(testtools.TestCase): @staticmethod @webob.dec.wsgify def failing_application(req): raise Fault(ERROR_MESSAGE) def setUp(self): super(TestFaultWrapper, self).setUp() self.req = webob.Request.blank('/') self.environ = self.req.environ self.environ['HTTP_ACCEPT'] = 'application/json' self.start_response_mock = mock.MagicMock() self.fail_app = fault_wrap.FaultWrapper(self.failing_application) def test_fault_is_wrapped(self): response = self.fail_app(self.environ, self.start_response_mock) # response is a single member list error_struct = jsonutils.loads(response[0]) first_error = error_struct['errors'][0] self.assertIn(ERROR_MESSAGE, first_error['detail']) self.assertEqual(500, first_error['status']) self.assertEqual('Internal Server Error', first_error['title']) def test_fault_response_headers(self): self.fail_app(self.environ, self.start_response_mock) call_args = self.start_response_mock.call_args self.assertEqual('500 Internal Server Error', call_args[0][0]) @mock.patch("placement.fault_wrap.LOG") def test_fault_log(self, mocked_log): self.fail_app(self.environ, self.start_response_mock) mocked_log.exception.assert_called_once_with( 'Placement API unexpected error: %s', mock.ANY) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/test_handler.py0000664000175000017500000001623500000000000024763 0ustar00zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Unit tests for the functions used by the placement API handlers.""" from unittest import mock import microversion_parse from oslo_utils.fixture import uuidsentinel import routes import testtools import webob from placement import handler from placement.handlers import root from placement import microversion # Used in tests below def start_response(*args, **kwargs): pass def _environ(path='/moo', method='GET'): return { 'PATH_INFO': path, 'REQUEST_METHOD': method, 'SERVER_NAME': 'example.com', 'SERVER_PORT': '80', 'wsgi.url_scheme': 'http', # The microversion version value is not used, but it # needs to be set to avoid a KeyError. 
microversion.MICROVERSION_ENVIRON: microversion_parse.Version(1, 12), } class DispatchTest(testtools.TestCase): def setUp(self): super(DispatchTest, self).setUp() self.mapper = routes.Mapper() self.route_handler = mock.MagicMock() def test_no_match_null_map(self): self.assertRaises(webob.exc.HTTPNotFound, handler.dispatch, _environ(), start_response, self.mapper) def test_no_match_with_map(self): self.mapper.connect('/foobar', action='hello') self.assertRaises(webob.exc.HTTPNotFound, handler.dispatch, _environ(), start_response, self.mapper) def test_simple_match(self): self.mapper.connect('/foobar', action=self.route_handler, conditions=dict(method=['GET'])) environ = _environ(path='/foobar') handler.dispatch(environ, start_response, self.mapper) self.route_handler.assert_called_with(environ, start_response) def test_simple_match_routing_args(self): self.mapper.connect('/foobar/{id}', action=self.route_handler, conditions=dict(method=['GET'])) environ = _environ(path='/foobar/%s' % uuidsentinel.foobar) handler.dispatch(environ, start_response, self.mapper) self.route_handler.assert_called_with(environ, start_response) self.assertEqual(uuidsentinel.foobar, environ['wsgiorg.routing_args'][1]['id']) class MapperTest(testtools.TestCase): def setUp(self): super(MapperTest, self).setUp() declarations = { '/hello': {'GET': 'hello'} } self.mapper = handler.make_map(declarations) def test_no_match(self): environ = _environ(path='/cow') self.assertIsNone(self.mapper.match(environ=environ)) def test_match(self): environ = _environ(path='/hello') action = self.mapper.match(environ=environ)['action'] self.assertEqual('hello', action) def test_405_methods(self): environ = _environ(path='/hello', method='POST') result = self.mapper.match(environ=environ) self.assertEqual(handler.handle_405, result['action']) self.assertEqual('GET', result['_methods']) def test_405_headers(self): environ = _environ(path='/hello', method='POST') global headers, status headers = status = None def local_start_response(*args, **kwargs): global headers, status status = args[0] headers = {header[0]: header[1] for header in args[1]} handler.dispatch(environ, local_start_response, self.mapper) allow_header = headers['allow'] self.assertEqual('405 Method Not Allowed', status) self.assertEqual('GET', allow_header) # PEP 3333 requires that headers be whatever the native str # is in that version of Python. Never unicode. 
self.assertEqual(str, type(allow_header)) class PlacementLoggingTest(testtools.TestCase): @mock.patch("placement.handler.LOG") def test_404_no_error_log(self, mocked_log): environ = _environ(path='/hello', method='GET') config = mock.MagicMock() context_mock = mock.Mock() context_mock.to_policy_values.return_value = {'roles': ['admin']} environ['placement.context'] = context_mock app = handler.PlacementHandler(config=config) self.assertRaises(webob.exc.HTTPNotFound, app, environ, start_response) mocked_log.error.assert_not_called() mocked_log.exception.assert_not_called() class DeclarationsTest(testtools.TestCase): def setUp(self): super(DeclarationsTest, self).setUp() self.mapper = handler.make_map(handler.ROUTE_DECLARATIONS) def test_root_slash_match(self): environ = _environ(path='/') result = self.mapper.match(environ=environ) self.assertEqual(root.home, result['action']) def test_root_empty_match(self): environ = _environ(path='') result = self.mapper.match(environ=environ) self.assertEqual(root.home, result['action']) class ContentHeadersTest(testtools.TestCase): def setUp(self): super(ContentHeadersTest, self).setUp() self.environ = _environ(path='/') config = mock.MagicMock() self.environ['placement.context'] = mock.MagicMock() self.app = handler.PlacementHandler(config=config) def test_no_content_type(self): self.environ['CONTENT_LENGTH'] = '10' self.assertRaisesRegex(webob.exc.HTTPBadRequest, "content-type header required when " "content-length > 0", self.app, self.environ, start_response) def test_non_integer_content_length(self): self.environ['CONTENT_LENGTH'] = 'foo' self.assertRaisesRegex(webob.exc.HTTPBadRequest, "content-length header must be an integer", self.app, self.environ, start_response) def test_empty_content_type(self): self.environ['CONTENT_LENGTH'] = '10' self.environ['CONTENT_TYPE'] = '' self.assertRaisesRegex(webob.exc.HTTPBadRequest, "content-type header required when " "content-length > 0", self.app, self.environ, start_response) def test_empty_content_length_and_type_works(self): self.environ['CONTENT_LENGTH'] = '' self.environ['CONTENT_TYPE'] = '' self.app(self.environ, start_response) def test_content_length_and_type_works(self): self.environ['CONTENT_LENGTH'] = '10' self.environ['CONTENT_TYPE'] = 'foo' self.app(self.environ, start_response) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/test_microversion.py0000664000175000017500000001357500000000000026071 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
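# NOTE(editor): illustrative sketch only; not part of the openstack_placement
# source. ContentHeadersTest above pins the header validation performed
# before dispatch: a non-integer content-length is rejected, and a positive
# content-length without a content-type is rejected. Restated as a standalone
# check (function name hypothetical):
import webob


def _sketch_check_content_headers(environ):
    """Raise HTTPBadRequest for the header combinations rejected above."""
    length = environ.get('CONTENT_LENGTH', '')
    if length:
        if not length.isdigit():
            raise webob.exc.HTTPBadRequest(
                'content-length header must be an integer')
        if int(length) > 0 and not environ.get('CONTENT_TYPE'):
            raise webob.exc.HTTPBadRequest(
                'content-type header required when content-length > 0')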
"""Tests for placement microversion handling.""" import collections import operator from unittest import mock import microversion_parse import testtools import webob from placement import microversion def handler(): return True class TestMicroversionFindMethod(testtools.TestCase): def test_method_405(self): self.assertRaises( webob.exc.HTTPMethodNotAllowed, microversion._find_method, microversion._fully_qualified_name(handler), '1.1', 405) def test_method_404(self): self.assertRaises( webob.exc.HTTPNotFound, microversion._find_method, microversion._fully_qualified_name(handler), '1.1', 404) class TestMicroversionDecoration(testtools.TestCase): @mock.patch('placement.microversion.VERSIONED_METHODS', new=collections.defaultdict(list)) def test_methods_structure(self): """Test that VERSIONED_METHODS gets data as expected.""" self.assertEqual(0, len(microversion.VERSIONED_METHODS)) fully_qualified_method = microversion._fully_qualified_name( handler) microversion.version_handler('1.1', '1.10')(handler) microversion.version_handler('2.0')(handler) methods_data = microversion.VERSIONED_METHODS[fully_qualified_method] stored_method_data = methods_data[-1] self.assertEqual(2, len(methods_data)) self.assertEqual(microversion_parse.Version(1, 1), stored_method_data[0]) self.assertEqual(microversion_parse.Version(1, 10), stored_method_data[1]) self.assertEqual(handler, stored_method_data[2]) self.assertEqual(microversion_parse.Version(2, 0), methods_data[0][0]) def test_version_handler_float_exception(self): self.assertRaises(TypeError, microversion.version_handler(1.1), handler) def test_version_handler_nan_exception(self): self.assertRaises(TypeError, microversion.version_handler('cow'), handler) def test_version_handler_tuple_exception(self): self.assertRaises(TypeError, microversion.version_handler((1, 1)), handler) class TestMicroversionIntersection(testtools.TestCase): """Test that there are no overlaps in the versioned handlers.""" # If you add versioned handlers you need to update this value to # reflect the change. The value is the total number of methods # with different names, not the total number overall. That is, # if you add two different versions of method 'foobar' the # number only goes up by one if no other version foobar yet # exists. This operates as a simple sanity check. TOTAL_VERSIONED_METHODS = 20 def test_methods_versioned(self): methods_data = microversion.VERSIONED_METHODS self.assertEqual(self.TOTAL_VERSIONED_METHODS, len(methods_data)) @staticmethod def _check_intersection(method_info): # See check_for_versions_intersection in # wsgi. 
pairs = [] counter = 0 for min_ver, max_ver, func in method_info: pairs.append((min_ver, 1, func)) pairs.append((max_ver, -1, func)) pairs.sort(key=operator.itemgetter(0)) for p in pairs: counter += p[1] if counter > 1: return True return False @mock.patch('placement.microversion.VERSIONED_METHODS', new=collections.defaultdict(list)) def test_faked_intersection(self): microversion.version_handler('1.0', '1.9')(handler) microversion.version_handler('1.8', '2.0')(handler) for method_info in microversion.VERSIONED_METHODS.values(): self.assertTrue(self._check_intersection(method_info)) @mock.patch('placement.microversion.VERSIONED_METHODS', new=collections.defaultdict(list)) def test_faked_non_intersection(self): microversion.version_handler('1.0', '1.8')(handler) microversion.version_handler('1.9', '2.0')(handler) for method_info in microversion.VERSIONED_METHODS.values(): self.assertFalse(self._check_intersection(method_info)) def test_check_real_for_intersection(self): """Check the real handlers to make sure there is no intersctions.""" for method_name, method_info in microversion.VERSIONED_METHODS.items(): self.assertFalse( self._check_intersection(method_info), 'method %s has intersecting versioned handlers' % method_name) class MicroversionSequentialTest(testtools.TestCase): def test_microversion_sequential(self): for method_name, method_list in microversion.VERSIONED_METHODS.items(): previous_min_version = method_list[0][0] for method in method_list[1:]: previous_min_version = microversion_parse.parse_version_string( '%s.%s' % (previous_min_version.major, previous_min_version.minor - 1)) self.assertEqual( previous_min_version, method[1], "The microversions aren't sequential in the method %s" % method_name) previous_min_version = method[0] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/test_policy.py0000664000175000017500000001024500000000000024640 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import fixtures from oslo_config import cfg from oslo_config import fixture as config_fixture from oslo_policy import policy as oslo_policy from placement import conf from placement import context from placement import exception from placement import policy from placement.tests.unit import base from placement.tests.unit import policy_fixture class PlacementPolicyTestCase(base.ContextTestCase): """Tests interactions with placement policy.""" def setUp(self): super(PlacementPolicyTestCase, self).setUp() config = cfg.ConfigOpts() self.conf_fixture = self.useFixture(config_fixture.Config(config)) conf.register_opts(config) self.ctxt = context.RequestContext(user_id='fake', project_id='fake') self.target = {'user_id': 'fake', 'project_id': 'fake'} # A value is required in the database connection opt for conf to # parse. 
self.conf_fixture.config(connection='stub', group='placement_database') config([], default_config_files=[]) self.ctxt.config = config policy.reset() self.addCleanup(policy.reset) def test_modified_policy_reloads(self): """Creates a temporary policy.yaml file and tests authorizations against a fake rule between updates to the physical policy file. """ tempdir = self.useFixture(fixtures.TempDir()) tmpfilename = os.path.join(tempdir.path, 'policy.yaml') self.conf_fixture.config( group='oslo_policy', policy_file=tmpfilename) action = 'placement:test' # Load the default action and rule (defaults to "any"). enforcer = policy._get_enforcer(self.conf_fixture.conf) rule = oslo_policy.RuleDefault(action, '') enforcer.register_default(rule) # Now auth should work because the action is registered and anyone # can perform the action. policy.authorize(self.ctxt, action, self.target) # Now update the policy file and reload it to disable the action # from all users. with open(tmpfilename, "w") as policyfile: policyfile.write('"%s": "!"' % action) enforcer.load_rules(force_reload=True) self.assertRaises(exception.PolicyNotAuthorized, policy.authorize, self.ctxt, action, self.target) def test_authorize_do_raise_false(self): """Tests that authorize does not raise an exception when the check fails. """ fixture = self.useFixture( policy_fixture.PolicyFixture(self.conf_fixture)) # It doesn't matter which policy we use here so long as it's # registered. policy_name = 'placement:resource_providers:list' fixture.set_rules({policy_name: '!'}) self.assertFalse( policy.authorize( self.ctxt, policy_name, self.target, do_raise=False)) def test_init_pick_policy_file_from_oslo_config_option(self): """Tests a scenario where the oslo policy enforcer in init pick the policy file set in [oslo_policy]/policy_file config option. """ tempdir = self.useFixture(fixtures.TempDir()) tmpfilename = os.path.join(tempdir.path, 'policy.yaml') self.conf_fixture.config(group='oslo_policy', policy_file=tmpfilename) # Create the [oslo_policy]/policy_file. with open(tmpfilename, "w") as policyfile: policyfile.write('# Assume upgrade with existing custom policy.') config = self.conf_fixture.conf enforcer = policy._get_enforcer(config) self.assertEqual(config.oslo_policy.policy_file, enforcer.policy_file) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/test_requestlog.py0000664000175000017500000000562100000000000025535 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
"""Tests for the placement request log middleware.""" from unittest import mock import testtools import webob from placement import requestlog class TestRequestLog(testtools.TestCase): @staticmethod @webob.dec.wsgify def application(req): req.response.status = 200 return req.response def setUp(self): super(TestRequestLog, self).setUp() self.req = webob.Request.blank('/resource_providers?name=myrp') self.environ = self.req.environ # The blank does not include remote address, so add it. self.environ['REMOTE_ADDR'] = '127.0.0.1' # nor a microversion self.environ['placement.microversion'] = '2.1' def test_get_uri(self): req_uri = requestlog.RequestLog._get_uri(self.environ) self.assertEqual('/resource_providers?name=myrp', req_uri) def test_get_uri_knows_prefix(self): self.environ['SCRIPT_NAME'] = '/placement' req_uri = requestlog.RequestLog._get_uri(self.environ) self.assertEqual('/placement/resource_providers?name=myrp', req_uri) @mock.patch("placement.requestlog.RequestLog.write_log") @mock.patch("placement.requestlog.LOG") def test_middleware_writes_logs(self, mocked_log, write_log): mocked_log.isEnabledFor.return_value = True start_response_mock = mock.MagicMock() app = requestlog.RequestLog(self.application) app(self.environ, start_response_mock) write_log.assert_called_once_with( self.environ, '/resource_providers?name=myrp', '200 OK', '0') @mock.patch("placement.requestlog.LOG") def test_middleware_sends_message(self, mocked_log): start_response_mock = mock.MagicMock() app = requestlog.RequestLog(self.application) app(self.environ, start_response_mock) mocked_log.debug.assert_called_once_with( 'Starting request: %s "%s %s"', '127.0.0.1', 'GET', '/resource_providers?name=myrp') mocked_log.info.assert_called_once_with( '%(REMOTE_ADDR)s "%(REQUEST_METHOD)s %(REQUEST_URI)s" ' 'status: %(status)s len: %(bytes)s microversion: %(microversion)s', {'microversion': '2.1', 'status': '200', 'REQUEST_URI': '/resource_providers?name=myrp', 'REQUEST_METHOD': 'GET', 'REMOTE_ADDR': '127.0.0.1', 'bytes': '0'}) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/tests/unit/test_util.py0000664000175000017500000014622700000000000024330 0ustar00zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Unit tests for the utility functions used by the placement API.""" import datetime from unittest import mock import fixtures import microversion_parse from oslo_middleware import request_id from oslo_utils.fixture import uuidsentinel from oslo_utils import timeutils import testtools import webob import placement from placement import context from placement import lib as pl from placement import microversion from placement.objects import resource_class as rc_obj from placement.objects import resource_provider as rp_obj from placement.tests.unit import base from placement import util from placement.util import roundrobin class TestCheckAccept(testtools.TestCase): """Confirm behavior of util.check_accept.""" @staticmethod @util.check_accept('application/json', 'application/vnd.openstack') def handler(req): """Fake handler to test decorator.""" return True def test_fail_no_match(self): req = webob.Request.blank('/') req.accept = 'text/plain' error = self.assertRaises(webob.exc.HTTPNotAcceptable, self.handler, req) self.assertEqual( 'Only application/json, application/vnd.openstack is provided', str(error)) def test_fail_complex_no_match(self): req = webob.Request.blank('/') req.accept = 'text/html;q=0.9,text/plain,application/vnd.aws;q=0.8' error = self.assertRaises(webob.exc.HTTPNotAcceptable, self.handler, req) self.assertEqual( 'Only application/json, application/vnd.openstack is provided', str(error)) def test_success_no_accept(self): req = webob.Request.blank('/') self.assertTrue(self.handler(req)) def test_success_simple_match(self): req = webob.Request.blank('/') req.accept = 'application/json' self.assertTrue(self.handler(req)) def test_success_complex_any_match(self): req = webob.Request.blank('/') req.accept = 'application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' self.assertTrue(self.handler(req)) def test_success_complex_lower_quality_match(self): req = webob.Request.blank('/') req.accept = 'application/xml;q=0.9,application/vnd.openstack;q=0.8' self.assertTrue(self.handler(req)) class TestExtractJSON(testtools.TestCase): # Although the intent of this test class is not to test that # schemas work, we may as well use a real one to ensure that # behaviors are what we expect. 
schema = { "type": "object", "properties": { "name": {"type": "string"}, "uuid": {"type": "string", "format": "uuid"} }, "required": ["name"], "additionalProperties": False } def test_not_json(self): error = self.assertRaises(webob.exc.HTTPBadRequest, util.extract_json, 'I am a string', self.schema) self.assertIn('Malformed JSON', str(error)) def test_malformed_json(self): error = self.assertRaises(webob.exc.HTTPBadRequest, util.extract_json, '{"my bytes got left behind":}', self.schema) self.assertIn('Malformed JSON', str(error)) def test_schema_mismatch(self): error = self.assertRaises(webob.exc.HTTPBadRequest, util.extract_json, '{"a": "b"}', self.schema) self.assertIn('JSON does not validate', str(error)) def test_type_invalid(self): error = self.assertRaises(webob.exc.HTTPBadRequest, util.extract_json, '{"name": 1}', self.schema) self.assertIn('JSON does not validate', str(error)) def test_format_checker(self): error = self.assertRaises(webob.exc.HTTPBadRequest, util.extract_json, '{"name": "hello", "uuid": "not a uuid"}', self.schema) self.assertIn('JSON does not validate', str(error)) def test_no_additional_properties(self): error = self.assertRaises(webob.exc.HTTPBadRequest, util.extract_json, '{"name": "hello", "cow": "moo"}', self.schema) self.assertIn('JSON does not validate', str(error)) def test_valid(self): data = util.extract_json( '{"name": "cow", ' '"uuid": "%s"}' % uuidsentinel.rp_uuid, self.schema) self.assertEqual('cow', data['name']) self.assertEqual(uuidsentinel.rp_uuid, data['uuid']) class QueryParamsSchemaTestCase(testtools.TestCase): def test_validate_request(self): schema = { 'type': 'object', 'properties': { 'foo': {'type': 'string'} }, 'additionalProperties': False} req = webob.Request.blank('/test?foo=%88') error = self.assertRaises(webob.exc.HTTPBadRequest, util.validate_query_params, req, schema) self.assertIn('Invalid query string parameters', str(error)) class TestJSONErrorFormatter(testtools.TestCase): def setUp(self): super(TestJSONErrorFormatter, self).setUp() self.environ = {} # TODO(jaypipes): Remove this when we get more than a single version # in the placement API. The fact that we only had a single version was # masking a bug in the utils code. _versions = [ '1.0', '1.1', ] self.useFixture(fixtures.MonkeyPatch('placement.microversion.VERSIONS', _versions)) def test_status_to_int_code(self): body = '' status = '404 Not Found' title = '' result = util.json_error_formatter( body, status, title, self.environ) self.assertEqual(404, result['errors'][0]['status']) def test_strip_body_tags(self): body = '

<h1>Big Error!</h1>

' status = '400 Bad Request' title = '' result = util.json_error_formatter( body, status, title, self.environ) self.assertEqual('Big Error!', result['errors'][0]['detail']) def test_request_id_presence(self): body = '' status = '400 Bad Request' title = '' # no request id in environ, none in error result = util.json_error_formatter( body, status, title, self.environ) self.assertNotIn('request_id', result['errors'][0]) # request id in environ, request id in error self.environ[request_id.ENV_REQUEST_ID] = 'stub-id' result = util.json_error_formatter( body, status, title, self.environ) self.assertEqual('stub-id', result['errors'][0]['request_id']) def test_microversion_406_handling(self): body = '' status = '400 Bad Request' title = '' # Not a 406, no version info required. result = util.json_error_formatter( body, status, title, self.environ) self.assertNotIn('max_version', result['errors'][0]) self.assertNotIn('min_version', result['errors'][0]) # A 406 but not because of microversions (microversion # parsing was successful), no version info # required. status = '406 Not Acceptable' version_obj = microversion_parse.parse_version_string('2.3') self.environ[microversion.MICROVERSION_ENVIRON] = version_obj result = util.json_error_formatter( body, status, title, self.environ) self.assertNotIn('max_version', result['errors'][0]) self.assertNotIn('min_version', result['errors'][0]) # Microversion parsing failed, status is 406, send version info. del self.environ[microversion.MICROVERSION_ENVIRON] result = util.json_error_formatter( body, status, title, self.environ) self.assertEqual(microversion.max_version_string(), result['errors'][0]['max_version']) self.assertEqual(microversion.min_version_string(), result['errors'][0]['min_version']) class TestRequireContent(testtools.TestCase): """Confirm behavior of util.require_accept.""" @staticmethod @util.require_content('application/json') def handler(req): """Fake handler to test decorator.""" return True def test_fail_no_content_type(self): req = webob.Request.blank('/') error = self.assertRaises(webob.exc.HTTPUnsupportedMediaType, self.handler, req) self.assertEqual( 'The media type None is not supported, use application/json', str(error)) def test_fail_wrong_content_type(self): req = webob.Request.blank('/') req.content_type = 'text/plain' error = self.assertRaises(webob.exc.HTTPUnsupportedMediaType, self.handler, req) self.assertEqual( 'The media type text/plain is not supported, use application/json', str(error)) def test_success_content_type(self): req = webob.Request.blank('/') req.content_type = 'application/json' self.assertTrue(self.handler(req)) class TestPlacementURLs(base.ContextTestCase): def setUp(self): super(TestPlacementURLs, self).setUp() fake_context = context.RequestContext( user_id='fake', project_id='fake') self.resource_provider = rp_obj.ResourceProvider( fake_context, name=uuidsentinel.rp_name, uuid=uuidsentinel.rp_uuid) self.resource_class = rc_obj.ResourceClass( fake_context, name='CUSTOM_BAREMETAL_GOLD', id=1000) def test_resource_provider_url(self): environ = {} expected_url = '/resource_providers/%s' % uuidsentinel.rp_uuid self.assertEqual(expected_url, util.resource_provider_url( environ, self.resource_provider)) def test_resource_provider_url_prefix(self): # SCRIPT_NAME represents the mount point of a WSGI # application when it is hosted at a path/prefix. 
environ = {'SCRIPT_NAME': '/placement'} expected_url = ('/placement/resource_providers/%s' % uuidsentinel.rp_uuid) self.assertEqual(expected_url, util.resource_provider_url( environ, self.resource_provider)) def test_inventories_url(self): environ = {} expected_url = ('/resource_providers/%s/inventories' % uuidsentinel.rp_uuid) self.assertEqual(expected_url, util.inventory_url( environ, self.resource_provider)) def test_inventory_url(self): resource_class = 'DISK_GB' environ = {} expected_url = ('/resource_providers/%s/inventories/%s' % (uuidsentinel.rp_uuid, resource_class)) self.assertEqual(expected_url, util.inventory_url( environ, self.resource_provider, resource_class)) def test_resource_class_url(self): environ = {} expected_url = '/resource_classes/CUSTOM_BAREMETAL_GOLD' self.assertEqual(expected_url, util.resource_class_url( environ, self.resource_class)) def test_resource_class_url_prefix(self): # SCRIPT_NAME represents the mount point of a WSGI # application when it is hosted at a path/prefix. environ = {'SCRIPT_NAME': '/placement'} expected_url = '/placement/resource_classes/CUSTOM_BAREMETAL_GOLD' self.assertEqual(expected_url, util.resource_class_url( environ, self.resource_class)) class TestNormalizeResourceQsParam(testtools.TestCase): def test_success(self): qs = "VCPU:1" resources = util.normalize_resources_qs_param(qs) expected = { 'VCPU': 1, } self.assertEqual(expected, resources) qs = "VCPU:1,MEMORY_MB:1024,DISK_GB:100" resources = util.normalize_resources_qs_param(qs) expected = { 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 100, } self.assertEqual(expected, resources) def test_400_empty_string(self): qs = "" self.assertRaises( webob.exc.HTTPBadRequest, util.normalize_resources_qs_param, qs, ) def test_400_bad_int(self): qs = "VCPU:foo" self.assertRaises( webob.exc.HTTPBadRequest, util.normalize_resources_qs_param, qs, ) def test_400_no_amount(self): qs = "VCPU" self.assertRaises( webob.exc.HTTPBadRequest, util.normalize_resources_qs_param, qs, ) def test_400_zero_amount(self): qs = "VCPU:0" self.assertRaises( webob.exc.HTTPBadRequest, util.normalize_resources_qs_param, qs, ) class TestNormalizeTraitsQsParamLegacy(testtools.TestCase): def test_one(self): trait = 'HW_CPU_X86_VMX' # Various whitespace permutations for fmt in ('%s', ' %s', '%s ', ' %s ', ' %s '): self.assertEqual( set([trait]), util.normalize_traits_qs_param_to_legacy_value(fmt % trait) ) def test_multiple(self): traits = ( 'HW_CPU_X86_VMX', 'HW_GPU_API_DIRECT3D_V12_0', 'HW_NIC_OFFLOAD_RX', 'CUSTOM_GOLD', 'STORAGE_DISK_SSD', ) self.assertEqual( set(traits), util.normalize_traits_qs_param_to_legacy_value( '%s, %s,%s , %s , %s ' % traits) ) def test_400_all_empty(self): for qs in ('', ' ', ' ', ',', ' , , '): self.assertRaises( webob.exc.HTTPBadRequest, util.normalize_traits_qs_param_to_legacy_value, qs) def test_400_some_empty(self): traits = ( 'HW_NIC_OFFLOAD_RX', 'CUSTOM_GOLD', 'STORAGE_DISK_SSD', ) for fmt in ('%s,,%s,%s', ',%s,%s,%s', '%s,%s,%s,', ' %s , %s , , %s'): self.assertRaises( webob.exc.HTTPBadRequest, util.normalize_traits_qs_param_to_legacy_value, fmt % traits) class TestNormalizeTraitsQsParam(testtools.TestCase): def test_one(self): trait = 'HW_CPU_X86_VMX' # Various whitespace permutations for fmt in ('%s', ' %s', '%s ', ' %s ', ' %s '): self.assertEqual( ([{trait}], set()), util.normalize_traits_qs_param(fmt % trait) ) def test_multiple(self): traits = ( 'HW_CPU_X86_VMX', 'HW_GPU_API_DIRECT3D_V12_0', 'HW_NIC_OFFLOAD_RX', 'CUSTOM_GOLD', 'STORAGE_DISK_SSD', ) self.assertEqual( ([{trait} for trait 
in traits], set()), util.normalize_traits_qs_param( '%s, %s,%s , %s , %s ' % traits) ) def test_400_all_empty(self): for qs in ('', ' ', ' ', ',', ' , , '): self.assertRaises( webob.exc.HTTPBadRequest, util.normalize_traits_qs_param, qs) def test_400_some_empty(self): traits = ( 'HW_NIC_OFFLOAD_RX', 'CUSTOM_GOLD', 'STORAGE_DISK_SSD', ) for fmt in ( '%s,,%s,%s', ',%s,%s,%s', '%s,%s,%s,', ' %s , %s , , %s', '!,%s,%s,%s', ): self.assertRaises( webob.exc.HTTPBadRequest, util.normalize_traits_qs_param, fmt % traits, allow_forbidden=True, ) def test_multiple_with_forbidden(self): traits = ( '!HW_CPU_X86_VMX', 'HW_GPU_API_DIRECT3D_V12_0', '!HW_NIC_OFFLOAD_RX', 'CUSTOM_GOLD', '!STORAGE_DISK_SSD', ) self.assertRaises( webob.exc.HTTPBadRequest, util.normalize_traits_qs_param, '%s, %s,%s , %s , %s ' % traits, allow_forbidden=False) self.assertEqual( ( [{'HW_GPU_API_DIRECT3D_V12_0'}, {'CUSTOM_GOLD'}], {'HW_CPU_X86_VMX', 'HW_NIC_OFFLOAD_RX', 'STORAGE_DISK_SSD'}), util.normalize_traits_qs_param( '%s, %s,%s , %s , %s ' % traits, allow_forbidden=True) ) def test_any_traits(self): param = 'in:T1 ,T2 , T3' self.assertRaises( webob.exc.HTTPBadRequest, util.normalize_traits_qs_param, param, allow_any_traits=False ) self.assertEqual( ([{'T1', 'T2', 'T3'}], set()), util.normalize_traits_qs_param(param, allow_any_traits=True) ) def test_any_traits_not_mix_with_forbidden(self): param = 'in:T1 ,!T2 , T3' self.assertRaises( webob.exc.HTTPBadRequest, util.normalize_traits_qs_param, param, allow_forbidden=True, allow_any_traits=True, ) class TestNormalizeTraitsQsParams(testtools.TestCase): @staticmethod def _get_req(qstring, version): req = webob.Request.blank( '?' + qstring, ) mv_parsed = microversion_parse.Version(*version) mv_parsed.max_version = microversion_parse.parse_version_string( microversion.max_version_string() ) mv_parsed.min_version = microversion_parse.parse_version_string( microversion.min_version_string() ) req.environ[placement.microversion.MICROVERSION_ENVIRON] = mv_parsed return req def test_suffix(self): req = self._get_req('required=!BAZ&requiredX=FOO,BAR', (1, 38)) required, forbidden = util.normalize_traits_qs_params(req, suffix='') self.assertEqual([], required) self.assertEqual({'BAZ'}, forbidden) required, forbidden = util.normalize_traits_qs_params(req, suffix='X') self.assertEqual([{'FOO'}, {'BAR'}], required) self.assertEqual(set(), forbidden) def test_allow_forbidden_1_21(self): req = self._get_req('required=!BAZ', (1, 21)) ex = self.assertRaises( webob.exc.HTTPBadRequest, util.normalize_traits_qs_params, req, suffix='', ) self.assertIn( "Invalid query string parameters: Expected 'required' parameter " "value of the form: HW_CPU_X86_VMX,CUSTOM_MAGIC. Got: !BAZ", str(ex), ) def test_allow_forbidden_1_22(self): req = self._get_req('required=!BAZ', (1, 22)) required, forbidden = util.normalize_traits_qs_params(req, suffix='') self.assertEqual([], required) self.assertEqual({'BAZ'}, forbidden) def test_repeated_param_1_38(self): req = self._get_req('required=FOO,!BAR&required=BAZ', (1, 38)) required, forbidden = util.normalize_traits_qs_params(req, suffix='') self.assertEqual([{'BAZ'}], required) self.assertEqual(set(), forbidden) def test_allow_any_traits_1_38(self): req = self._get_req('required=in:FOO,BAZ', (1, 38)) ex = self.assertRaises( webob.exc.HTTPBadRequest, util.normalize_traits_qs_params, req, suffix='', ) self.assertIn( "Invalid query string parameters: " "The format 'in:HW_CPU_X86_VMX,CUSTOM_MAGIC' only supported " "since microversion 1.39. 
Got: in:FOO,BAZ", str(ex), ) def test_allow_any_traits_1_39(self): req = self._get_req('required=in:FOO,BAZ', (1, 39)) required, forbidden = util.normalize_traits_qs_params(req, suffix='') self.assertEqual([{'FOO', 'BAZ'}], required) self.assertEqual(set(), forbidden) def test_repeated_param_1_39(self): req = self._get_req( 'required=in:T1,T2' '&required=T3,!T4' '&required=in:T5,T6' '&required=!T7,T8', (1, 39) ) required, forbidden = util.normalize_traits_qs_params(req, suffix='') self.assertEqual( [{'T1', 'T2'}, {'T3'}, {'T5', 'T6'}, {'T8'}], required) self.assertEqual({'T4', 'T7'}, forbidden) class TestParseQsRequestGroups(testtools.TestCase): @staticmethod def do_parse(qstring, version=(1, 18)): """Converts a querystring to a MultiDict, mimicking request.GET, and runs dict_from_request on it. """ req = webob.Request.blank('?' + qstring) mv_parsed = microversion_parse.Version(*version) mv_parsed.max_version = microversion_parse.parse_version_string( microversion.max_version_string()) mv_parsed.min_version = microversion_parse.parse_version_string( microversion.min_version_string()) req.environ['placement.microversion'] = mv_parsed rqparam = pl.RequestWideParams.from_request(req) d = pl.RequestGroup.dict_from_request(req, rqparam) # Sort for easier testing return [d[suff] for suff in sorted(d)] def assertRequestGroupsEqual(self, expected, observed): self.assertEqual(len(expected), len(observed)) for exp, obs in zip(expected, observed): self.assertEqual(vars(exp), vars(obs)) def test_empty_raises(self): # TODO(efried): Check the specific error code self.assertRaises(webob.exc.HTTPBadRequest, self.do_parse, '') def test_unnumbered_only(self): """Unnumbered resources & traits - no numbered groupings.""" qs = ('resources=VCPU:2,MEMORY_MB:2048' '&required=HW_CPU_X86_VMX,CUSTOM_GOLD') expected = [ pl.RequestGroup( use_same_provider=False, resources={ 'VCPU': 2, 'MEMORY_MB': 2048, }, required_traits=[ {'HW_CPU_X86_VMX'}, {'CUSTOM_GOLD'} ], ), ] self.assertRequestGroupsEqual(expected, self.do_parse(qs)) def test_member_of_single_agg(self): """Unnumbered resources with one member_of query param.""" agg1_uuid = uuidsentinel.agg1 qs = ('resources=VCPU:2,MEMORY_MB:2048' '&member_of=%s' % agg1_uuid) expected = [ pl.RequestGroup( use_same_provider=False, resources={ 'VCPU': 2, 'MEMORY_MB': 2048, }, member_of=[ set([agg1_uuid]) ] ), ] self.assertRequestGroupsEqual(expected, self.do_parse(qs)) def test_member_of_multiple_aggs_prior_microversion(self): """Unnumbered resources with multiple member_of query params before the supported microversion should raise a 400. 
""" agg1_uuid = uuidsentinel.agg1 agg2_uuid = uuidsentinel.agg2 qs = ('resources=VCPU:2,MEMORY_MB:2048' '&member_of=%s' '&member_of=%s' % (agg1_uuid, agg2_uuid)) self.assertRaises(webob.exc.HTTPBadRequest, self.do_parse, qs) def test_member_of_multiple_aggs(self): """Unnumbered resources with multiple member_of query params.""" agg1_uuid = uuidsentinel.agg1 agg2_uuid = uuidsentinel.agg2 qs = ('resources=VCPU:2,MEMORY_MB:2048' '&member_of=%s' '&member_of=%s' % (agg1_uuid, agg2_uuid)) expected = [ pl.RequestGroup( use_same_provider=False, resources={ 'VCPU': 2, 'MEMORY_MB': 2048, }, member_of=[ set([agg1_uuid]), set([agg2_uuid]) ] ), ] self.assertRequestGroupsEqual( expected, self.do_parse(qs, version=(1, 24))) def test_unnumbered_resources_only(self): """Validate the bit that can be used for 1.10 and earlier.""" qs = 'resources=VCPU:2,MEMORY_MB:2048,DISK_GB:5,CUSTOM_MAGIC:123' expected = [ pl.RequestGroup( use_same_provider=False, resources={ 'VCPU': 2, 'MEMORY_MB': 2048, 'DISK_GB': 5, 'CUSTOM_MAGIC': 123, }, ), ] self.assertRequestGroupsEqual(expected, self.do_parse(qs)) def test_numbered_only(self): # Crazy ordering and nonsequential numbers don't matter. # It's okay to have a 'resources' without a 'required'. # A trait that's repeated shows up in both spots. qs = ('resources1=VCPU:2,MEMORY_MB:2048' '&required42=CUSTOM_GOLD' '&resources99=DISK_GB:5' '&resources42=CUSTOM_MAGIC:123' '&required1=HW_CPU_X86_VMX,CUSTOM_GOLD') expected = [ pl.RequestGroup( resources={ 'VCPU': 2, 'MEMORY_MB': 2048, }, required_traits=[ {'HW_CPU_X86_VMX'}, {'CUSTOM_GOLD'} ], ), pl.RequestGroup( resources={ 'CUSTOM_MAGIC': 123, }, required_traits=[ {'CUSTOM_GOLD'} ], ), pl.RequestGroup( resources={ 'DISK_GB': 5, }, ), ] self.assertRequestGroupsEqual(expected, self.do_parse(qs)) def test_numbered_and_unnumbered(self): qs = ('resources=VCPU:3,MEMORY_MB:4096,DISK_GB:10' '&required=HW_CPU_X86_VMX,CUSTOM_MEM_FLASH,STORAGE_DISK_SSD' '&resources1=SRIOV_NET_VF:2' '&required1=CUSTOM_PHYSNET_PRIVATE' '&resources2=SRIOV_NET_VF:1,NET_INGRESS_BYTES_SEC:20000' ',NET_EGRESS_BYTES_SEC:10000' '&required2=CUSTOM_SWITCH_BIG,CUSTOM_PHYSNET_PROD' '&resources3=CUSTOM_MAGIC:123') expected = [ pl.RequestGroup( use_same_provider=False, resources={ 'VCPU': 3, 'MEMORY_MB': 4096, 'DISK_GB': 10, }, required_traits=[ {'HW_CPU_X86_VMX'}, {'CUSTOM_MEM_FLASH'}, {'STORAGE_DISK_SSD'} ], ), pl.RequestGroup( resources={ 'SRIOV_NET_VF': 2, }, required_traits=[ {'CUSTOM_PHYSNET_PRIVATE'}, ], ), pl.RequestGroup( resources={ 'SRIOV_NET_VF': 1, 'NET_INGRESS_BYTES_SEC': 20000, 'NET_EGRESS_BYTES_SEC': 10000, }, required_traits=[ {'CUSTOM_SWITCH_BIG'}, {'CUSTOM_PHYSNET_PROD'}, ], ), pl.RequestGroup( resources={ 'CUSTOM_MAGIC': 123, }, ), ] self.assertRequestGroupsEqual(expected, self.do_parse(qs)) def test_member_of_multiple_aggs_numbered(self): """Numbered resources with multiple member_of query params.""" agg1_uuid = uuidsentinel.agg1 agg2_uuid = uuidsentinel.agg2 agg3_uuid = uuidsentinel.agg3 agg4_uuid = uuidsentinel.agg4 qs = ('resources1=VCPU:2' '&member_of1=%s' '&member_of1=%s' '&resources2=VCPU:2' '&member_of2=in:%s,%s' % ( agg1_uuid, agg2_uuid, agg3_uuid, agg4_uuid)) expected = [ pl.RequestGroup( resources={ 'VCPU': 2, }, member_of=[ set([agg1_uuid]), set([agg2_uuid]) ] ), pl.RequestGroup( resources={ 'VCPU': 2, }, member_of=[ set([agg3_uuid, agg4_uuid]), ] ), ] self.assertRequestGroupsEqual( expected, self.do_parse(qs, version=(1, 24))) def test_member_of_forbidden_aggs(self): agg1_uuid = uuidsentinel.agg1 agg2_uuid = uuidsentinel.agg2 agg3_uuid = 
uuidsentinel.agg3 agg4_uuid = uuidsentinel.agg4 qs = ('resources=VCPU:2' '&member_of=%s' '&member_of=%s' '&member_of=!%s' '&member_of=!%s' % ( agg1_uuid, agg2_uuid, agg3_uuid, agg4_uuid)) expected = [ pl.RequestGroup( use_same_provider=False, resources={ 'VCPU': 2, }, member_of=[ set([agg1_uuid]), set([agg2_uuid]), ], forbidden_aggs=set( [agg3_uuid, agg4_uuid] ), ), ] self.assertRequestGroupsEqual( expected, self.do_parse(qs, version=(1, 32))) def test_member_of_multiple_forbidden_aggs(self): agg1_uuid = uuidsentinel.agg1 agg2_uuid = uuidsentinel.agg2 agg3_uuid = uuidsentinel.agg3 qs = ('resources=VCPU:2' '&member_of=!in:%s,%s,%s' % ( agg1_uuid, agg2_uuid, agg3_uuid)) expected = [ pl.RequestGroup( use_same_provider=False, resources={ 'VCPU': 2, }, forbidden_aggs=set( [agg1_uuid, agg2_uuid, agg3_uuid] ), ), ] self.assertRequestGroupsEqual( expected, self.do_parse(qs, version=(1, 32))) def test_member_of_forbidden_aggs_prior_microversion(self): agg1_uuid = uuidsentinel.agg1 agg2_uuid = uuidsentinel.agg2 qs = ('resources=VCPU:2' '&member_of=!%s' '&member_of=!%s' % (agg1_uuid, agg2_uuid)) self.assertRaises( webob.exc.HTTPBadRequest, self.do_parse, qs, version=(1, 31)) qs = ('resources=VCPU:2' '&member_of=!in:%s,%s' % (agg1_uuid, agg2_uuid)) self.assertRaises( webob.exc.HTTPBadRequest, self.do_parse, qs, version=(1, 31)) def test_member_of_forbidden_aggs_invalid_usage(self): agg1_uuid = uuidsentinel.agg1 agg2_uuid = uuidsentinel.agg2 qs = ('resources=VCPU:2' '&member_of=in:%s,!%s' % (agg1_uuid, agg2_uuid)) self.assertRaises( webob.exc.HTTPBadRequest, self.do_parse, qs, version=(1, 32)) agg1_uuid = uuidsentinel.agg1 agg2_uuid = uuidsentinel.agg2 qs = ('resources=VCPU:2' '&member_of=!%s,!%s' % (agg1_uuid, agg2_uuid)) self.assertRaises( webob.exc.HTTPBadRequest, self.do_parse, qs, version=(1, 32)) def test_400_malformed_resources(self): # Somewhat duplicates TestNormalizeResourceQsParam.test_400*. qs = ('resources=VCPU:0,MEMORY_MB:4096,DISK_GB:10' # Bad ----------^ '&required=HW_CPU_X86_VMX,CUSTOM_MEM_FLASH,STORAGE_DISK_SSD' '&resources1=SRIOV_NET_VF:2' '&required1=CUSTOM_PHYSNET_PRIVATE' '&resources2=SRIOV_NET_VF:1,NET_INGRESS_BYTES_SEC:20000' ',NET_EGRESS_BYTES_SEC:10000' '&required2=CUSTOM_SWITCH_BIG,CUSTOM_PHYSNET_PROD' '&resources3=CUSTOM_MAGIC:123') self.assertRaises(webob.exc.HTTPBadRequest, self.do_parse, qs) def test_400_malformed_traits(self): # Somewhat duplicates TestNormalizeResourceQsParam.test_400*. 
qs = ('resources=VCPU:7,MEMORY_MB:4096,DISK_GB:10' '&required=HW_CPU_X86_VMX,CUSTOM_MEM_FLASH,STORAGE_DISK_SSD' '&resources1=SRIOV_NET_VF:2' '&required1=CUSTOM_PHYSNET_PRIVATE' '&resources2=SRIOV_NET_VF:1,NET_INGRESS_BYTES_SEC:20000' ',NET_EGRESS_BYTES_SEC:10000' '&required2=CUSTOM_SWITCH_BIG,CUSTOM_PHYSNET_PROD,' # Bad -------------------------------------------^ '&resources3=CUSTOM_MAGIC:123') self.assertRaises(webob.exc.HTTPBadRequest, self.do_parse, qs) def test_400_traits_no_resources_unnumbered(self): qs = ('resources9=VCPU:7,MEMORY_MB:4096,DISK_GB:10' # Oops ---^ '&required=HW_CPU_X86_VMX,CUSTOM_MEM_FLASH,STORAGE_DISK_SSD' '&resources1=SRIOV_NET_VF:2' '&required1=CUSTOM_PHYSNET_PRIVATE' '&resources2=SRIOV_NET_VF:1,NET_INGRESS_BYTES_SEC:20000' ',NET_EGRESS_BYTES_SEC:10000' '&required2=CUSTOM_SWITCH_BIG,CUSTOM_PHYSNET_PROD' '&resources3=CUSTOM_MAGIC:123') self.assertRaises(webob.exc.HTTPBadRequest, self.do_parse, qs) def test_400_traits_no_resources_numbered(self): qs = ('resources=VCPU:7,MEMORY_MB:4096,DISK_GB:10' '&required=HW_CPU_X86_VMX,CUSTOM_MEM_FLASH,STORAGE_DISK_SSD' '&resources11=SRIOV_NET_VF:2' # Oops ----^^ '&required1=CUSTOM_PHYSNET_PRIVATE' '&resources20=SRIOV_NET_VF:1,NET_INGRESS_BYTES_SEC:20000' # Oops ----^^ ',NET_EGRESS_BYTES_SEC:10000' '&required2=CUSTOM_SWITCH_BIG,CUSTOM_PHYSNET_PROD' '&resources3=CUSTOM_MAGIC:123') self.assertRaises(webob.exc.HTTPBadRequest, self.do_parse, qs) def test_400_member_of_no_resources_numbered(self): agg1_uuid = uuidsentinel.agg1 qs = ('resources=VCPU:7,MEMORY_MB:4096,DISK_GB:10' '&required=HW_CPU_X86_VMX,CUSTOM_MEM_FLASH,STORAGE_DISK_SSD' '&member_of2=%s' % agg1_uuid) self.assertRaises(webob.exc.HTTPBadRequest, self.do_parse, qs) def test_forbidden_one_group(self): """When forbidden are allowed this will parse, but otherwise will indicate an invalid trait. """ qs = ('resources=VCPU:2,MEMORY_MB:2048' '&required=CUSTOM_PHYSNET1,!CUSTOM_SWITCH_BIG') expected_forbidden = [ pl.RequestGroup( use_same_provider=False, resources={ 'VCPU': 2, 'MEMORY_MB': 2048, }, required_traits=[ {'CUSTOM_PHYSNET1'}, ], forbidden_traits={ 'CUSTOM_SWITCH_BIG', } ), ] expected_message = ( "Invalid query string parameters: Expected 'required' parameter " "value of the form: HW_CPU_X86_VMX,CUSTOM_MAGIC. 
Got: " "CUSTOM_PHYSNET1,!CUSTOM_SWITCH_BIG") exc = self.assertRaises(webob.exc.HTTPBadRequest, self.do_parse, qs) self.assertEqual(expected_message, str(exc)) self.assertRequestGroupsEqual( expected_forbidden, self.do_parse(qs, version=(1, 22))) def test_forbidden_conflict(self): qs = ('resources=VCPU:2,MEMORY_MB:2048' '&required=CUSTOM_PHYSNET1,!CUSTOM_PHYSNET1') expected_message = ( 'Conflicting required and forbidden traits found ' 'in the following traits keys: required: (CUSTOM_PHYSNET1)') exc = self.assertRaises( webob.exc.HTTPBadRequest, self.do_parse, qs, version=(1, 22)) self.assertEqual(expected_message, str(exc)) def test_forbidden_two_groups(self): qs = ('resources=VCPU:2,MEMORY_MB:2048&resources1=CUSTOM_MAGIC:1' '&required1=CUSTOM_PHYSNET1,!CUSTOM_PHYSNET2') expected = [ pl.RequestGroup( use_same_provider=False, resources={ 'VCPU': 2, 'MEMORY_MB': 2048, }, ), pl.RequestGroup( resources={ 'CUSTOM_MAGIC': 1, }, required_traits=[ {'CUSTOM_PHYSNET1'}, ], forbidden_traits={ 'CUSTOM_PHYSNET2', } ), ] self.assertRequestGroupsEqual( expected, self.do_parse(qs, version=(1, 22))) def test_forbidden_separate_groups_no_conflict(self): qs = ('resources1=CUSTOM_MAGIC:1&required1=CUSTOM_PHYSNET1' '&resources2=CUSTOM_MAGIC:1&required2=!CUSTOM_PHYSNET1') expected = [ pl.RequestGroup( use_same_provider=True, resources={ 'CUSTOM_MAGIC': 1, }, required_traits=[ {'CUSTOM_PHYSNET1'}, ], ), pl.RequestGroup( use_same_provider=True, resources={ 'CUSTOM_MAGIC': 1, }, forbidden_traits={ 'CUSTOM_PHYSNET1', } ), ] self.assertRequestGroupsEqual( expected, self.do_parse(qs, version=(1, 22))) def test_group_suffix_length_1_33(self): longstring = '01234567' * 8 qs = 'resources_%s=CUSTOM_MAGIC:1' % longstring exc = self.assertRaises( webob.exc.HTTPBadRequest, self.do_parse, qs, version=(1, 33)) # NOTE(cdent): This error message is not what an API user would see. # They would get an error during JSON schema processing. self.assertIn('least one request group', str(exc)) def test_group_suffix_character_limits_1_33(self): qs = 'resources!#%=CUSTOM_MAGIC:1' exc = self.assertRaises( webob.exc.HTTPBadRequest, self.do_parse, qs, version=(1, 33)) # NOTE(cdent): This error message is not what an API user would see. # They would get an error during JSON schema processing. self.assertIn('least one request group', str(exc)) def test_group_suffix_character_limits_1_22(self): qs = 'resources!#%=CUSTOM_MAGIC:1' exc = self.assertRaises( webob.exc.HTTPBadRequest, self.do_parse, qs, version=(1, 22)) # NOTE(cdent): This error message is not what an API user would see. # They would get an error during JSON schema processing. 
self.assertIn('least one request group', str(exc)) def test_good_suffix_1_33(self): qs = ('resources_car_HOUSE_10=CUSTOM_MAGIC:1' '&required_car_HOUSE_10=CUSTOM_PHYSNET1') expected = [ pl.RequestGroup( use_same_provider=True, resources={ 'CUSTOM_MAGIC': 1, }, required_traits=[ {'CUSTOM_PHYSNET1'}, ], ), ] self.assertRequestGroupsEqual( expected, self.do_parse(qs, version=(1, 33))) self.assertRaises( webob.exc.HTTPBadRequest, self.do_parse, qs, version=(1, 22)) def test_any_traits_1_38(self): qs = 'resources1=RABBIT:1&required1=in:WHITE,BLACK' exc = self.assertRaises( webob.exc.HTTPBadRequest, self.do_parse, qs, version=(1, 38)) self.assertIn( "The format 'in:HW_CPU_X86_VMX,CUSTOM_MAGIC' only supported since " "microversion 1.39.", str(exc)) def test_any_traits_1_39(self): qs = 'resources1=RABBIT:1&required1=in:WHITE,BLACK' expected = [ pl.RequestGroup( use_same_provider=True, resources={ 'RABBIT': 1, }, required_traits=[ {'WHITE', 'BLACK'}, ], ), ] self.assertRequestGroupsEqual( expected, self.do_parse(qs, version=(1, 39))) def test_any_traits_repeated(self): qs = 'resources1=CUSTOM_MAGIC:1&required1=in:T1,T2&required1=T3,!T4' expected = [ pl.RequestGroup( use_same_provider=True, resources={ 'CUSTOM_MAGIC': 1, }, required_traits=[ {'T1', 'T2'}, {'T3'}, ], forbidden_traits={ 'T4' }, ), ] self.assertRequestGroupsEqual( expected, self.do_parse(qs, version=(1, 39))) def test_any_traits_multiple_groups(self): qs = ('resources=RABBIT:1&required=in:WHITE,BLACK&' 'resources2=CAT:2&required2=in:SILVER,RED&required2=!SPOTTED') expected = [ pl.RequestGroup( use_same_provider=False, resources={ 'RABBIT': 1, }, required_traits=[ {'WHITE', 'BLACK'}, ], forbidden_traits={ }, ), pl.RequestGroup( use_same_provider=True, resources={ 'CAT': 2, }, required_traits=[ {'SILVER', 'RED'}, ], forbidden_traits={ 'SPOTTED' }, ), ] self.assertRequestGroupsEqual( expected, self.do_parse(qs, version=(1, 39))) def test_any_traits_forbidden_conflict(self): # going against one part of an OR expression is not a conflict as the # other parts still can match and fulfill the query qs = ('resources=VCPU:2' '&required=in:CUSTOM_PHYSNET1,CUSTOM_PHYSNET2' '&required=!CUSTOM_PHYSNET1') rgs = self.do_parse(qs, version=(1, 39)) self.assertEqual(1, len(rgs)) # but going against all parts of an OR expression is a conflict qs = ('resources=VCPU:2' '&required=in:CUSTOM_PHYSNET1,CUSTOM_PHYSNET2' '&required=!CUSTOM_PHYSNET1,!CUSTOM_PHYSNET2') expected_message = ( 'Conflicting required and forbidden traits found ' 'in the following traits keys: required: ' '(CUSTOM_PHYSNET1, CUSTOM_PHYSNET2)') exc = self.assertRaises( webob.exc.HTTPBadRequest, self.do_parse, qs, version=(1, 39)) self.assertEqual(expected_message, str(exc)) def test_stringification(self): agg1 = uuidsentinel.agg1 agg2 = uuidsentinel.agg2 qs = (f'resources1=CAT:2&required1=in:SILVER,RED&' f'required1=TABBY,!SPOTTED&member_of1=in:{agg1},{agg2}') rgs = self.do_parse(qs, version=(1, 39)) self.assertEqual(1, len(rgs)) self.assertEqual( 'RequestGroup(' 'use_same_provider=True, ' 'resources={CAT:2}, ' 'traits=((RED or SILVER) and TABBY and !SPOTTED), ' f'aggregates=[[{", ".join(sorted([agg1, agg2]))}]])', str(rgs[0]) ) class TestPickLastModified(base.ContextTestCase): def setUp(self): super(TestPickLastModified, self).setUp() fake_context = context.RequestContext( user_id='fake', project_id='fake') self.resource_provider = rp_obj.ResourceProvider( fake_context, name=uuidsentinel.rp_name, uuid=uuidsentinel.rp_uuid) def test_updated_versus_none(self): now = 
timeutils.utcnow(with_timezone=True) self.resource_provider.updated_at = now self.resource_provider.created_at = now chosen_time = util.pick_last_modified(None, self.resource_provider) self.assertEqual(now, chosen_time) def test_created_versus_none(self): now = timeutils.utcnow(with_timezone=True) self.resource_provider.created_at = now self.resource_provider.updated_at = None chosen_time = util.pick_last_modified(None, self.resource_provider) self.assertEqual(now, chosen_time) def test_last_modified_less(self): now = timeutils.utcnow(with_timezone=True) less = now - datetime.timedelta(seconds=300) self.resource_provider.updated_at = now self.resource_provider.created_at = now chosen_time = util.pick_last_modified(less, self.resource_provider) self.assertEqual(now, chosen_time) def test_last_modified_more(self): now = timeutils.utcnow(with_timezone=True) more = now + datetime.timedelta(seconds=300) self.resource_provider.updated_at = now self.resource_provider.created_at = now chosen_time = util.pick_last_modified(more, self.resource_provider) self.assertEqual(more, chosen_time) def test_last_modified_same(self): now = timeutils.utcnow(with_timezone=True) self.resource_provider.updated_at = now self.resource_provider.created_at = now chosen_time = util.pick_last_modified(now, self.resource_provider) self.assertEqual(now, chosen_time) def test_no_object_time_fields_less(self): # An unsaved ovo will not have the created_at or updated_at fields # present on the object at all. now = timeutils.utcnow(with_timezone=True) less = now - datetime.timedelta(seconds=300) with mock.patch('oslo_utils.timeutils.utcnow') as mock_utc: mock_utc.return_value = now chosen_time = util.pick_last_modified( less, self.resource_provider) self.assertEqual(now, chosen_time) mock_utc.assert_called_once_with(with_timezone=True) def test_no_object_time_fields_more(self): # An unsaved ovo will not have the created_at or updated_at fields # present on the object at all. now = timeutils.utcnow(with_timezone=True) more = now + datetime.timedelta(seconds=300) with mock.patch('oslo_utils.timeutils.utcnow') as mock_utc: mock_utc.return_value = now chosen_time = util.pick_last_modified( more, self.resource_provider) self.assertEqual(more, chosen_time) mock_utc.assert_called_once_with(with_timezone=True) def test_no_object_time_fields_none(self): # An unsaved ovo will not have the created_at or updated_at fields # present on the object at all. now = timeutils.utcnow(with_timezone=True) with mock.patch('oslo_utils.timeutils.utcnow') as mock_utc: mock_utc.return_value = now chosen_time = util.pick_last_modified( None, self.resource_provider) self.assertEqual(now, chosen_time) mock_utc.assert_called_once_with(with_timezone=True) class RunOnceTests(testtools.TestCase): fake_logger = mock.MagicMock() @util.run_once("already ran once", fake_logger) def dummy_test_func(self, fail=False): if fail: raise ValueError() return True def setUp(self): super(RunOnceTests, self).setUp() self.dummy_test_func.reset() RunOnceTests.fake_logger.reset_mock() def test_wrapped_funtions_called_once(self): self.assertFalse(self.dummy_test_func.called) result = self.dummy_test_func() self.assertTrue(result) self.assertTrue(self.dummy_test_func.called) # assert that on second invocation no result # is returned and that the logger is invoked. 
result = self.dummy_test_func() RunOnceTests.fake_logger.assert_called_once() self.assertIsNone(result) def test_wrapped_funtions_called_once_raises(self): self.assertFalse(self.dummy_test_func.called) self.assertRaises(ValueError, self.dummy_test_func, fail=True) self.assertTrue(self.dummy_test_func.called) # assert that on second invocation no result # is returned and that the logger is invoked. result = self.dummy_test_func() RunOnceTests.fake_logger.assert_called_once() self.assertIsNone(result) def test_wrapped_funtions_can_be_reset(self): # assert we start with a clean state self.assertFalse(self.dummy_test_func.called) result = self.dummy_test_func() self.assertTrue(result) self.dummy_test_func.reset() # assert we restored a clean state self.assertFalse(self.dummy_test_func.called) result = self.dummy_test_func() self.assertTrue(result) # assert that we never called the logger RunOnceTests.fake_logger.assert_not_called() def test_reset_calls_cleanup(self): mock_clean = mock.Mock() @util.run_once("already ran once", self.fake_logger, cleanup=mock_clean) def f(): pass f() self.assertTrue(f.called) f.reset() self.assertFalse(f.called) mock_clean.assert_called_once_with() def test_clean_is_not_called_at_reset_if_wrapped_not_called(self): mock_clean = mock.Mock() @util.run_once("already ran once", self.fake_logger, cleanup=mock_clean) def f(): pass self.assertFalse(f.called) f.reset() self.assertFalse(f.called) self.assertFalse(mock_clean.called) def test_reset_works_even_if_cleanup_raises(self): mock_clean = mock.Mock(side_effect=ValueError()) @util.run_once("already ran once", self.fake_logger, cleanup=mock_clean) def f(): pass f() self.assertTrue(f.called) self.assertRaises(ValueError, f.reset) self.assertFalse(f.called) mock_clean.assert_called_once_with() class RoundRobinTests(testtools.TestCase): def test_no_input(self): self.assertEqual([], list(roundrobin())) def test_single_input(self): self.assertEqual([1, 2], list(roundrobin(iter([1, 2])))) def test_balanced_inputs(self): self.assertEqual( [1, "x", 2, "y"], list(roundrobin( iter([1, 2]), iter(["x", "y"])) ) ) def test_unbalanced_inputs(self): self.assertEqual( ["A", "D", "E", "B", "F", "C"], list(roundrobin( iter("ABC"), iter("D"), iter("EF")) ) ) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/util.py0000664000175000017500000006004400000000000021140 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Utility methods for placement API.""" import functools import itertools import jsonschema from oslo_log import log as logging from oslo_middleware import request_id from oslo_serialization import jsonutils from oslo_utils import timeutils from oslo_utils import uuidutils import webob from placement import errors # NOTE(cdent): avoid cyclical import conflict between util and # microversion import placement.microversion LOG = logging.getLogger(__name__) # Error code handling constants ENV_ERROR_CODE = 'placement.error_code' ERROR_CODE_MICROVERSION = (1, 23) _FORMAT_CHECKER = jsonschema.FormatChecker() @_FORMAT_CHECKER.checks('uuid') def _validate_uuid_format(instance): return uuidutils.is_uuid_like(instance) def check_accept(*types): """If accept is set explicitly, try to follow it. If there is no match for the incoming accept header send a 406 response code. If accept is not set send our usual content-type in response. """ def decorator(f): @functools.wraps(f) def decorated_function(req): if req.accept: best_matches = req.accept.acceptable_offers(types) if not best_matches: type_string = ', '.join(types) raise webob.exc.HTTPNotAcceptable( 'Only %(type)s is provided' % {'type': type_string}, json_formatter=json_error_formatter) return f(req) return decorated_function return decorator def extract_json(body, schema): """Extract JSON from a body and validate with the provided schema.""" try: data = jsonutils.loads(body) except ValueError as exc: raise webob.exc.HTTPBadRequest( 'Malformed JSON: %(error)s' % {'error': exc}, json_formatter=json_error_formatter) try: jsonschema.validate(data, schema, format_checker=_FORMAT_CHECKER) except jsonschema.ValidationError as exc: raise webob.exc.HTTPBadRequest( 'JSON does not validate: %(error)s' % {'error': exc}, json_formatter=json_error_formatter) return data def inventory_url(environ, resource_provider, resource_class=None): url = '%s/inventories' % resource_provider_url(environ, resource_provider) if resource_class: url = '%s/%s' % (url, resource_class) return url def json_error_formatter(body, status, title, environ): """A json_formatter for webob exceptions. Follows API-WG guidelines at http://specs.openstack.org/openstack/api-wg/guidelines/errors.html """ # Shortcut to microversion module, to avoid wraps below. microversion = placement.microversion # Clear out the html that webob sneaks in. body = webob.exc.strip_tags(body) # Get status code out of status message. webob's error formatter # only passes entire status string. status_code = int(status.split(None, 1)[0]) error_dict = { 'status': status_code, 'title': title, 'detail': body } # Version may not be set if we have experienced an error before it # is set. want_version = environ.get(microversion.MICROVERSION_ENVIRON) if want_version and want_version.matches(ERROR_CODE_MICROVERSION): error_dict['code'] = environ.get(ENV_ERROR_CODE, errors.DEFAULT) # If the request id middleware has had a chance to add an id, # put it in the error response. if request_id.ENV_REQUEST_ID in environ: error_dict['request_id'] = environ[request_id.ENV_REQUEST_ID] # When there is a no microversion in the environment and a 406, # microversion parsing failed so we need to include microversion # min and max information in the error response. 
if status_code == 406 and microversion.MICROVERSION_ENVIRON not in environ: error_dict['max_version'] = microversion.max_version_string() error_dict['min_version'] = microversion.min_version_string() return {'errors': [error_dict]} def pick_last_modified(last_modified, obj): """Choose max of last_modified and obj.updated_at or obj.created_at. If updated_at is not implemented in `obj` use the current time in UTC. """ current_modified = (obj.updated_at or obj.created_at) if current_modified is None: # The object was not loaded from the DB, it was created in # the current context. current_modified = timeutils.utcnow(with_timezone=True) if last_modified: last_modified = max(last_modified, current_modified) else: last_modified = current_modified return last_modified def require_content(content_type): """Decorator to require a content type in a handler.""" def decorator(f): @functools.wraps(f) def decorated_function(req): if req.content_type != content_type: # webob's unset content_type is the empty string so # set it the error message content to 'None' to make # a useful message in that case. This also avoids a # KeyError raised when webob.exc eagerly fills in a # Template for output we will never use. if not req.content_type: req.content_type = 'None' raise webob.exc.HTTPUnsupportedMediaType( 'The media type %(bad_type)s is not supported, ' 'use %(good_type)s' % {'bad_type': req.content_type, 'good_type': content_type}, json_formatter=json_error_formatter) else: return f(req) return decorated_function return decorator def resource_class_url(environ, resource_class): """Produce the URL for a resource class. If SCRIPT_NAME is present, it is the mount point of the placement WSGI app. """ prefix = environ.get('SCRIPT_NAME', '') return '%s/resource_classes/%s' % (prefix, resource_class.name) def resource_provider_url(environ, resource_provider): """Produce the URL for a resource provider. If SCRIPT_NAME is present, it is the mount point of the placement WSGI app. """ prefix = environ.get('SCRIPT_NAME', '') return '%s/resource_providers/%s' % (prefix, resource_provider.uuid) def trait_url(environ, trait): """Produce the URL for a trait. If SCRIPT_NAME is present, it is the mount point of the placement WSGI app. """ prefix = environ.get('SCRIPT_NAME', '') return '%s/traits/%s' % (prefix, trait.name) def validate_query_params(req, schema): try: # NOTE(Kevin_Zheng): The webob package throws UnicodeError when # param cannot be decoded. Catch this and raise HTTP 400. jsonschema.validate(dict(req.GET), schema, format_checker=jsonschema.FormatChecker()) except (jsonschema.ValidationError, UnicodeDecodeError) as exc: raise webob.exc.HTTPBadRequest( 'Invalid query string parameters: %(exc)s' % {'exc': exc}) def wsgi_path_item(environ, name): """Extract the value of a named field in a URL. Return None if the name is not present or there are no path items. """ # NOTE(cdent): For the time being we don't need to urldecode # the value as the entire placement API has paths that accept no # encoded values. try: return environ['wsgiorg.routing_args'][1][name] except (KeyError, IndexError): return None def normalize_resources_qs_param(qs): """Given a query string parameter for resources, validate it meets the expected format and return a dict of amounts, keyed by resource class name. 
The expected format of the resources parameter looks like so: $RESOURCE_CLASS_NAME:$AMOUNT,$RESOURCE_CLASS_NAME:$AMOUNT So, if the user was looking for resource providers that had room for an instance that will consume 2 vCPUs, 1024 MB of RAM and 50GB of disk space, they would use the following query string: ?resources=VCPU:2,MEMORY_MB:1024,DISK_GB:50 The returned value would be: { "VCPU": 2, "MEMORY_MB": 1024, "DISK_GB": 50, } :param qs: The value of the 'resources' query string parameter :raises `webob.exc.HTTPBadRequest` if the parameter's value isn't in the expected format. """ if qs.strip() == "": msg = ('Badly formed resources parameter. Expected resources ' 'query string parameter in form: ' '?resources=VCPU:2,MEMORY_MB:1024. Got: empty string.') raise webob.exc.HTTPBadRequest(msg) result = {} resource_tuples = qs.split(',') for rt in resource_tuples: try: rc_name, amount = rt.split(':') except ValueError: msg = ('Badly formed resources parameter. Expected resources ' 'query string parameter in form: ' '?resources=VCPU:2,MEMORY_MB:1024. Got: %s.') msg = msg % rt raise webob.exc.HTTPBadRequest(msg) try: amount = int(amount) except ValueError: msg = ('Requested resource %(resource_name)s expected positive ' 'integer amount. Got: %(amount)s.') msg = msg % { 'resource_name': rc_name, 'amount': amount, } raise webob.exc.HTTPBadRequest(msg) if amount < 1: msg = ('Requested resource %(resource_name)s requires ' 'amount >= 1. Got: %(amount)d.') msg = msg % { 'resource_name': rc_name, 'amount': amount, } raise webob.exc.HTTPBadRequest(msg) result[rc_name] = amount return result def normalize_traits_qs_param_to_legacy_value(val, allow_forbidden=False): """Parse a traits query string parameter value into the legacy return format. Note that this method doesn't know or care about the query parameter key, which may currently be of the form `required`, `required123`, etc., but which may someday also include `preferred`, etc. This method currently does no format validation of trait strings, other than to ensure they're not zero-length. This method only accepts query parameter value without 'in:' prefix support :param val: A traits query parameter value: a comma-separated string of trait names. :param allow_forbidden: If True, accept forbidden traits (that is, traits prefixed by '!') as a valid form when notifying the caller that the provided value is not properly formed. :return: A set of trait names or trait names prefixed with '!' :raises `webob.exc.HTTPBadRequest` if the val parameter is not in the expected format. """ # let's parse the query string to the new internal format required, forbidden = normalize_traits_qs_param(val, allow_forbidden) # then reformat that structure to the old format legacy_traits = set() for any_traits in required: # a legacy request does not have any-trait support so every internal # set expressing OR relationship should exactly contain one trait assert len(any_traits) == 1 legacy_traits.add(list(any_traits)[0]) for forbidden_trait in forbidden: legacy_traits.add('!' + forbidden_trait) return legacy_traits def normalize_traits_qs_param( val, allow_forbidden=False, allow_any_traits=False ): """Parse a traits query string parameter value. Note that this method doesn't know or care about the query parameter key, which may currently be of the form `required`, `required123`, etc., but which may someday also include `preferred`, etc. :param val: A traits query parameter value: either a comma-separated string of trait names including trait names with ! 
prefix, or a string with 'in:' prefix and of comma-separated list of trait names. The 'in:' prefixed string does not support trait names with ! prefix :param allow_forbidden: If True, accept forbidden traits (that is, traits prefixed by '!') as a valid form. :param allow_any_traits: if True, accept the 'in:' prefixed format. :return: a two tuple where: The first item is a list of set of traits. Each set of traits represents a set of required traits in an OR relationship, while different sets in the list represent required traits in an AND relationship. The second item is a set of forbidden traits. :raises `webob.exc.HTTPBadRequest` if the val parameter is not in the expected format. """ if val.startswith('in:'): if not allow_any_traits: msg = ( f"Invalid query string parameters: " f"The format 'in:HW_CPU_X86_VMX,CUSTOM_MAGIC' only supported " f"since microversion 1.39. Got: {val}") raise webob.exc.HTTPBadRequest(msg) any_traits = set(substr.strip() for substr in val[3:].split(',')) if not all(trait for trait in any_traits): msg = ( f"Invalid query string parameters: Expected 'required' " f"parameter value of the form: " f"in:HW_CPU_X86_VMX,CUSTOM_MAGIC. Got an empty trait in: " f"{val}") raise webob.exc.HTTPBadRequest(msg) if any(trait.startswith('!') for trait in any_traits): msg = ( f"Invalid query string parameters: " f"The format 'in:HW_CPU_X86_VMX,CUSTOM_MAGIC' does not " f"support forbidden traits. Got: {val}") raise webob.exc.HTTPBadRequest(msg) # the in: prefix means all the traits are in a single OR relationship # so we return [{every trait after the in: prefix}] return [any_traits], set() else: all_traits = [substr.strip() for substr in val.split(',')] # NOTE(gibi): lstrip will remove any number of consecutive '!' # characters from the beginning of the trait name. This means !!!!!FOO # is parsed as FOO. This is not a documented behavior of the API but # this is a bug that decided not to be fixed outside a microversion # bump. See # https://review.opendev.org/c/openstack/placement/+/826491/7/placement/util.py#426 forbidden_traits = { trait.lstrip('!') for trait in all_traits if trait.startswith('!')} if not all( trait for trait in itertools.chain(forbidden_traits, all_traits) ): expected_form = 'HW_CPU_X86_VMX,!CUSTOM_MAGIC' if not allow_forbidden: expected_form = 'HW_CPU_X86_VMX,CUSTOM_MAGIC' msg = ( f"Invalid query string parameters: Expected 'required' " f"parameter value of the form: {expected_form}. " f"Got an empty trait in: {val}") raise webob.exc.HTTPBadRequest(msg) # NOTE(gibi): we need to wrap each required trait into a one element # set of traits to keep the format of [{}, {}...] where each set of # traits represent OR relationship required_traits = [ {trait} for trait in all_traits if not trait.startswith('!')] if forbidden_traits and not allow_forbidden: msg = ( f"Invalid query string parameters: Expected 'required' " f"parameter value of the form: HW_CPU_X86_VMX,CUSTOM_MAGIC. " f"Got: {val}") raise webob.exc.HTTPBadRequest(msg) return required_traits, forbidden_traits def normalize_traits_qs_params(req, suffix=''): """Given a webob.Request object, validate and collect required querystring parameters. We begin supporting forbidden traits in microversion 1.22. We begin supporting any-traits and repeating the required param in microversion 1.39. :param req: a webob.Request object to read the params from :param suffix: the string suffix of the request group to read from the request. If empty then the unnamed request group is processed. 
:returns: a two tuple where: The first item is a list of set of traits. Each set of traits represents a set of required traits in an OR relationship, while different sets in the list represent required traits in an AND relationship. The second item is a set of forbidden traits. :raises webob.exc.HTTPBadRequest: if the format of the query param is not valid """ want_version = req.environ[placement.microversion.MICROVERSION_ENVIRON] allow_forbidden = want_version.matches((1, 22)) allow_any_traits = want_version.matches((1, 39)) required_traits = [] forbidden_traits = set() values = req.GET.getall('required' + suffix) if not allow_any_traits: # to keep the behavior of <= 1.38 we need to make sure that if # the query param is repeated we only consider the last one from the # request values = values[-1:] for value in values: rts, fts = normalize_traits_qs_param( value, allow_forbidden, allow_any_traits) required_traits += rts forbidden_traits |= fts return required_traits, forbidden_traits def normalize_member_of_qs_params(req, suffix=''): """Given a webob.Request object, validate that the member_of querystring parameters are correct. We begin supporting multiple member_of params in microversion 1.24 and forbidden aggregates in microversion 1.32. :param req: webob.Request object :return: A tuple of required_aggs: A list containing sets of UUIDs of required aggregates to filter on forbidden_aggs: A set of UUIDs of forbidden aggregates to filter on :raises `webob.exc.HTTPBadRequest` if the microversion requested is <1.24 and the request contains multiple member_of querystring params :raises `webob.exc.HTTPBadRequest` if the microversion requested is <1.32 and the request contains forbidden format of member_of querystring params with '!' prefix :raises `webob.exc.HTTPBadRequest` if the val parameter is not in the expected format. """ want_version = req.environ[placement.microversion.MICROVERSION_ENVIRON] multi_member_of = want_version.matches((1, 24)) allow_forbidden = want_version.matches((1, 32)) if not multi_member_of and len(req.GET.getall('member_of' + suffix)) > 1: raise webob.exc.HTTPBadRequest( 'Multiple member_of%s parameters are not supported' % suffix) required_aggs = [] forbidden_aggs = set() for value in req.GET.getall('member_of' + suffix): required, forbidden = normalize_member_of_qs_param(value) if required: required_aggs.append(required) if forbidden: if not allow_forbidden: raise webob.exc.HTTPBadRequest( 'Forbidden member_of%s parameters are not supported ' 'in the specified microversion' % suffix) forbidden_aggs |= forbidden return required_aggs, forbidden_aggs def normalize_member_of_qs_param(value): """Parse a member_of query string parameter value. Valid values are one of either - a single UUID - the prefix '!' followed by a single UUID - the prefix 'in:' or '!in:' followed by two or more comma-separated UUIDs. :param value: A member_of query parameter :return: A tuple of: required: A set of aggregate UUIDs at least one of which is required forbidden: A set of aggregate UUIDs all of which are forbidden :raises `webob.exc.HTTPBadRequest` if the value parameter is not in the expected format. """ if "," in value and not ( value.startswith("in:") or value.startswith("!in:")): msg = ("Multiple values for 'member_of' must be prefixed with the " "'in:' or '!in:' keyword using the valid microversion. 
" "Got: %s") % value raise webob.exc.HTTPBadRequest(msg) required = forbidden = set() if value.startswith('!in:'): forbidden = set(value[4:].split(',')) elif value.startswith('!'): forbidden = set([value[1:]]) elif value.startswith('in:'): required = set(value[3:].split(',')) else: required = set([value]) # Make sure the values are actually UUIDs. for aggr_uuid in (required | forbidden): if not uuidutils.is_uuid_like(aggr_uuid): msg = ("Invalid query string parameters: Expected 'member_of' " "parameter to contain valid UUID(s). Got: %s") % aggr_uuid raise webob.exc.HTTPBadRequest(msg) return required, forbidden def normalize_in_tree_qs_params(value): """Parse a in_tree query string parameter value. :param value: in_tree query parameter: A UUID of a resource provider. :return: A UUID of a resource provider. :raises `webob.exc.HTTPBadRequest` if the val parameter is not in the expected format. """ ret = value.strip() if not uuidutils.is_uuid_like(ret): msg = ("Invalid query string parameters: Expected 'in_tree' " "parameter to be a format of uuid. " "Got: %(val)s") % {'val': value} raise webob.exc.HTTPBadRequest(msg) return ret def run_once(message, logger, cleanup=None): """This is a utility function decorator to ensure a function is run once and only once in an interpreter instance. The decorated function object can be reset by calling its reset function. All exceptions raised by the wrapped function, logger and cleanup function will be propagated to the caller. """ def outer_wrapper(func): @functools.wraps(func) def wrapper(*args, **kwargs): if not wrapper.called: # Note(sean-k-mooney): the called state is always # updated even if the wrapped function completes # by raising an exception. If the caller catches # the exception it is their responsibility to call # reset if they want to re-execute the wrapped function. try: return func(*args, **kwargs) finally: wrapper.called = True else: logger(message) wrapper.called = False def reset(wrapper, *args, **kwargs): # Note(sean-k-mooney): we conditionally call the # cleanup function if one is provided only when the # wrapped function has been called previously. We catch # and reraise any exception that may be raised and update # the called state in a finally block to ensure its # always updated if reset is called. try: if cleanup and wrapper.called: return cleanup(*args, **kwargs) finally: wrapper.called = False wrapper.reset = functools.partial(reset, wrapper) return wrapper return outer_wrapper def roundrobin(*iterables): """roundrobin(iter('ABC'), iter('D'), iter('EF')) --> A D E B F C Returns a new generator consuming items from the passed in iterators in a round-robin fashion. It is adapted from https://docs.python.org/3/library/itertools.html#itertools-recipes """ iterators = map(iter, iterables) for num_active in range(len(iterables), 0, -1): iterators = itertools.cycle(itertools.islice(iterators, num_active)) yield from map(next, iterators) ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.280778 openstack_placement-13.0.0/placement/wsgi/0000775000175000017500000000000000000000000020556 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/wsgi/__init__.py0000664000175000017500000001055300000000000022673 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """WSGI script for Placement API WSGI handler for running Placement API under Apache2, nginx, gunicorn etc. """ import logging as py_logging import os import os.path from oslo_config import cfg from oslo_log import log as logging from oslo_middleware import cors from oslo_utils import importutils import pbr.version from placement import conf from placement import db_api from placement import deploy osprofiler = importutils.try_import('osprofiler') osprofiler_initializer = importutils.try_import('osprofiler.initializer') profiler = importutils.try_import('osprofiler.opts') CONFIG_FILE = 'placement.conf' # The distribution name is required here, not package. version_info = pbr.version.VersionInfo('openstack-placement') def setup_logging(config): # Any dependent libraries that have unhelp debug levels should be # pinned to a higher default. extra_log_level_defaults = [ 'routes=INFO', ] logging.set_defaults(default_log_levels=logging.get_default_log_levels() + extra_log_level_defaults) logging.setup(config, 'placement') py_logging.captureWarnings(True) def _get_config_files(env=None): """Return a list of one file or None describing config location. If None, that means oslo.config will look in the default locations for a config file. """ if env is None: env = os.environ dirname = env.get('OS_PLACEMENT_CONFIG_DIR', '').strip() if dirname: return [os.path.join(dirname, CONFIG_FILE)] else: return None def _parse_args(config, argv, default_config_files): # register placement's config options conf.register_opts(config) if profiler: profiler.set_defaults(config) _set_middleware_defaults() config(argv[1:], project='placement', version=version_info.version_string(), default_config_files=default_config_files) def setup_profiler(config): if osprofiler and config.profiler.enabled: osprofiler.initializer.init_from_conf( conf=config, context={}, project="placement", service="placement", host="??") def _set_middleware_defaults(): """Update default configuration options for oslo.middleware.""" cors.set_defaults( allow_headers=['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id', 'OpenStack-API-Version'], expose_headers=['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Subject-Token', 'X-Service-Token', 'OpenStack-API-Version'], allow_methods=['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] ) def init_application(): # initialize the config system conffiles = _get_config_files() config = cfg.ConfigOpts() conf.register_opts(config) # This will raise cfg.RequiredOptError when a required option is not set # (notably the database connection string). We want this to be a hard fail # that prevents the application from starting. The error will show up in # the wsgi server's logs. 
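# NOTE: an empty argv is passed here on purpose: option values come only from the config file(s) and built-in defaults, and the hosting WSGI server's own command line is never parsed by placement.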
_parse_args(config, [], default_config_files=conffiles) # initialize the logging system setup_logging(config) # configure database db_api.configure(config) # dump conf at debug if log_options if config.log_options: config.log_opt_values( logging.getLogger(__name__), logging.DEBUG) setup_profiler(config) # build and return our WSGI app return deploy.loadapp(config) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/wsgi/api.py0000664000175000017500000000142600000000000021704 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """WSGI application entry-point for Placement API.""" import threading from placement import wsgi application = None with threading.Lock(): if application is None: application = wsgi.init_application() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/placement/wsgi_wrapper.py0000664000175000017500000000267300000000000022700 0ustar00zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Extend functionality from webob.dec.wsgify for Placement API.""" import webob from oslo_log import log as logging from webob.dec import wsgify from placement import util LOG = logging.getLogger(__name__) class PlacementWsgify(wsgify): def call_func(self, req, *args, **kwargs): """Add json_error_formatter to any webob HTTPExceptions.""" try: super(PlacementWsgify, self).call_func(req, *args, **kwargs) except webob.exc.HTTPException as exc: LOG.debug("Placement API returning an error response: %s", exc) exc.json_formatter = util.json_error_formatter # The exception itself is not passed to json_error_formatter # but environ is, so set the environ. 
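# Handlers stash the placement error code (see placement.errors) in exc.comment; move it into the environ so json_error_formatter can emit it, and clear the comment so webob does not render it in the response body.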
if exc.comment: req.environ[util.ENV_ERROR_CODE] = exc.comment exc.comment = None raise ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.280778 openstack_placement-13.0.0/playbooks/0000775000175000017500000000000000000000000017640 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/playbooks/nested-perfload.yaml0000664000175000017500000000122700000000000023602 0ustar00zuulzuul00000000000000- hosts: all tasks: - name: Ensure {{ ansible_user_dir }}/logs exists become: true file: path: "{{ ansible_user_dir }}/logs" state: directory owner: "{{ ansible_user }}" - name: start placement args: chdir: "{{ ansible_user_dir }}/src/opendev.org/openstack/placement" shell: executable: /bin/bash cmd: gate/perfload-server.sh {{ ansible_user_dir }} - name: placement performance args: chdir: "{{ ansible_user_dir }}/src/opendev.org/openstack/placement" shell: executable: /bin/bash cmd: gate/perfload-nested-runner.sh {{ ansible_user_dir }} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/playbooks/perfload.yaml0000664000175000017500000000122000000000000022313 0ustar00zuulzuul00000000000000- hosts: all tasks: - name: Ensure {{ ansible_user_dir }}/logs exists become: true file: path: "{{ ansible_user_dir }}/logs" state: directory owner: "{{ ansible_user }}" - name: start placement args: chdir: "{{ ansible_user_dir }}/src/opendev.org/openstack/placement" shell: executable: /bin/bash cmd: gate/perfload-server.sh {{ ansible_user_dir }} - name: placement performance args: chdir: "{{ ansible_user_dir }}/src/opendev.org/openstack/placement" shell: executable: /bin/bash cmd: gate/perfload-runner.sh {{ ansible_user_dir }} ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/playbooks/post.yaml0000664000175000017500000000035200000000000021511 0ustar00zuulzuul00000000000000- hosts: all tasks: - name: Copy logs back to the executor synchronize: src: "{{ ansible_user_dir }}/logs" dest: "{{ zuul.executor.log_root }}/" mode: pull rsync_opts: - "--quiet" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2207778 openstack_placement-13.0.0/releasenotes/0000775000175000017500000000000000000000000020326 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1743591511.2887778 openstack_placement-13.0.0/releasenotes/notes/0000775000175000017500000000000000000000000021456 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/add-placment-wsgi-module-ae42938ebe0258cb.yaml0000664000175000017500000000107400000000000031424 0ustar00zuulzuul00000000000000--- features: - | A new module, ``placement.wsgi``, has been added as a place to gather WSGI ``application`` objects. This is intended to ease deployment by providing a consistent location for these objects. For example, if using uWSGI then instead of: .. code-block:: ini [uwsgi] wsgi-file = /bin/placement-api You can now use: .. code-block:: ini [uwsgi] module = placement.wsgi.api:application This also simplifies deployment with other WSGI servers that expect module paths such as gunicorn. 
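For example, a minimal gunicorn invocation (the worker count and bind address below are illustrative placeholders, not recommendations) might look like:

.. code-block:: console

   $ gunicorn --workers 4 --bind 0.0.0.0:8778 placement.wsgi.api:application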
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/alloc-candidates-in-tree-f69b0de5ba33096b.yaml0000664000175000017500000000203200000000000031363 0ustar00zuulzuul00000000000000--- features: - | Add support for the ``in_tree`` query parameter to the ``GET /allocation_candidates`` API. It accepts a UUID for a resource provider. If this parameter is provided, the only resource providers returned will be those in the same tree with the given resource provider. The numbered syntax ``in_tree`` is also supported. This restricts providers satisfying the Nth granular request group to the tree of the specified provider. This may be redundant with other ``in_tree`` values specified in other groups (including the unnumbered group). However, it can be useful in cases where a specific resource (e.g. DISK_GB) needs to come from a specific sharing provider (e.g. shared storage). For example, a request for ``VCPU`` and ``VGPU`` resources from ``myhost`` and ``DISK_GB`` resources from ``sharing1`` might look like:: ?resources=VCPU:1&in_tree= &resources1=VGPU:1&in_tree1= &resources2=DISK_GB:100&in_tree2= ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/allocation-candidate-mappings-e00cf6deadcee9ab.yaml0000664000175000017500000000150600000000000033067 0ustar00zuulzuul00000000000000--- features: - | In microversion 1.34_ the body of the response to a ``GET /allocation_candidates`` request_ has been extended to include a ``mappings`` field with each allocation request. The value is a dictionary associating request group suffixes with the uuids of those resource providers that satisfy the identified request group. For convenience, this mapping can be included in the request payload for ``POST /allocations``, ``PUT /allocations/{consumer_uuid}``, and ``POST /reshaper``, but it will be ignored. .. _1.34: https://docs.openstack.org/placement/latest/placement-api-microversion-history.html#request-group-mappings-in-allocation-candidates .. _request: https://developer.openstack.org/api-ref/placement/?expanded=list-allocation-candidates-detail#list-allocation-candidates ././@PaxHeader0000000000000000000000000000020500000000000011452 xustar0000000000000000111 path=openstack_placement-13.0.0/releasenotes/notes/allocation-candidate-same_subtree-aeed7b2570293dfb.yaml 22 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/allocation-candidate-same_subtree-aeed7b2570293dfb.yam0000664000175000017500000000115700000000000033262 0ustar00zuulzuul00000000000000--- features: - | From microversion ``1.36``, a new ``same_subtree`` queryparam on ``GET /allocation_candidates`` is supported. It accepts a comma-separated list of request group suffix strings ($S). Each must exactly match a suffix on a granular group somewhere else in the request. Importantly, the identified request groups need not have a resources$S. If this is provided, at least one of the resource providers satisfying a specified request group must be an ancestor of the rest. The ``same_subtree`` query parameter can be repeated and each repeated group is treated independently. 
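For example (the ``_ACCEL`` and ``_NIC`` suffixes and the resource classes below are purely illustrative), a request asking that the providers satisfying two granular groups share a subtree might look like::

    ?resources_ACCEL=CUSTOM_FPGA:1
    &resources_NIC=SRIOV_NET_VF:1
    &same_subtree=_ACCEL,_NIC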
././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=openstack_placement-13.0.0/releasenotes/notes/allocation-candidates-root_required-bfe4f96f96a2a5db.yaml 22 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/allocation-candidates-root_required-bfe4f96f96a2a5db.y0000664000175000017500000000122200000000000033417 0ustar00zuulzuul00000000000000--- features: - | Microversion 1.35_ adds support for the ``root_required`` query parameter to the ``GET /allocation_candidates`` API. It accepts a comma-delimited list of trait names, each optionally prefixed with ``!`` to indicate a forbidden trait, in the same format as the ``required`` query parameter. This restricts allocation requests in the response to only those whose (non-sharing) tree's root resource provider satisfies the specified trait requirements. .. _1.35: https://docs.openstack.org/placement/latest/placement-api-microversion-history.html#support-root_required-queryparam-on-get-allocation_candidates ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/allocation_conflict_retry_count-329daae86059f5ec.yaml0000664000175000017500000000151200000000000033311 0ustar00zuulzuul00000000000000--- fixes: - | When a single resource provider receives many concurrent allocation writes, retries may be performed server side when there is a resource provider generation conflict. When those retries are all consumed, the client receives an HTTP 409 response and may choose to try the request again. In an environment where high levels of concurrent allocation writes are common, such as a busy clustered hypervisor, the default retry count may be too low. See story 2006467_ A new configuation setting, ``[placement]/allocation_conflict_retry_count``, has been added to address this situation. It defines the number of times to retry, server-side, writing allocations when there is a resource provider generation conflict. .. _2006467: https://storyboard.openstack.org/#!/story/2006467 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/any-traits-support-d3807c27e5a8865c.yaml0000664000175000017500000000074300000000000030323 0ustar00zuulzuul00000000000000--- features: - | Microversion 1.39 adds support for the ``in:`` syntax in the ``required`` query parameter in the ``GET /resource_providers`` API as well as to the ``required`` and ``requiredN`` query params of the ``GET /allocation_candidates`` API. Also adds support for repeating the ``required`` and ``requiredN`` parameters in the respective APIs. So:: required=in:T3,T4&required=T1,!T2 is supported and it means T1 and not T2 and (T3 or T4). ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/bug-1792503-member-of-5c10df94caf3bd08.yaml0000664000175000017500000000074400000000000030173 0ustar00zuulzuul00000000000000--- fixes: - | Previously, when an aggregate was specified by the ``member_of`` query parameter in the ``GET /allocation_candidates`` operation, the non-root providers in the aggregate were excluded unless their root provider was also in the aggregate. With this release, the non-root providers directly associated with the aggregate are also considered. See the `Bug#1792503`_ for details. .. 
_Bug#1792503: https://bugs.launchpad.net/nova/+bug/1792503 ././@PaxHeader0000000000000000000000000000025000000000000011452 xustar0000000000000000146 path=openstack_placement-13.0.0/releasenotes/notes/bug-2070257-allocation-candidates-generation-limit-and-strategy.yaml-e73796898163fb55.yaml 22 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/bug-2070257-allocation-candidates-generation-limit-and0000664000175000017500000000605700000000000033237 0ustar00zuulzuul00000000000000--- fixes: - | In a deployment with wide and symmetric provider trees, i.e. where there are multiple child providers under the same root having inventory from the same resource class (e.g. in the case of nova's mdev GPU or PCI in Placement features), if the allocation candidate request asks for resources from those child RPs in multiple request groups, the number of possible allocation candidates grows rapidly. E.g.: * 1 root, 8 child RPs with 1 unit of resource each; an allocation candidate request with 6 groups of 1 unit of resource each => 8*7*6*5*4*3=20160 possible candidates * 1 root, 8 child RPs with 6 units of resource each; an allocation candidate request with 6 groups of 6 units of resource each => 8^6=262144 possible candidates Placement generates these candidates fully before applying the limit parameter provided in the allocation candidate query, to be able to do random sampling if ``[placement]randomize_allocation_candidates`` is True. Placement takes excessive time and memory to generate this amount of allocation candidates, and the client might time out waiting for the response, or the Placement API service might run out of memory and crash. To avoid request timeouts or out-of-memory events, a new ``[placement]max_allocation_candidates`` config option is implemented. Unlike the request's limit parameter, this limit is applied *during* the candidate generation process, not after it. So this new option can be used to limit the runtime and memory consumption of the Placement API service. The new config option defaults to ``-1``, meaning no limit, to keep the legacy behavior. We suggest tuning this config option in affected deployments based on the memory available for the Placement service and the timeout setting of the clients. A good initial value could be around ``100000``. If the number of generated allocation candidates is limited by the ``[placement]max_allocation_candidates`` config option, then it is possible to get candidates from only a limited set of root providers (e.g. compute nodes), as placement uses a depth-first strategy, i.e. generating all candidates from the first root before considering the next one. To avoid this issue, a new config option ``[placement]allocation_candidates_generation_strategy`` is introduced with two possible values: * ``depth-first``, generates all candidates from the first viable root provider before moving to the next. This is the default and preserves the legacy behavior. * ``breadth-first``, generates candidates from viable roots in a round-robin fashion, creating one candidate from each viable root before creating the second candidate from the first root. This is the new, optional behavior. In a deployment where ``[placement]max_allocation_candidates`` is configured to a positive number, we recommend setting ``[placement]allocation_candidates_generation_strategy`` to ``breadth-first``. ..
_Bug#2070257: https://bugs.launchpad.net/nova/+bug/2070257 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/consumer_type-857b812aef10381e.yaml0000664000175000017500000000175400000000000027401 0ustar00zuulzuul00000000000000--- features: - | Microversion 1.38 adds support for a ``consumer_type`` (required) key in the request body of ``POST /allocations``, ``PUT /allocations/{consumer_uuid}`` and in the response of ``GET /allocations/{consumer_uuid}``. ``GET /usages`` requests gain a ``consumer_type`` key as an optional query parameter to filter usages based on consumer_types. The ``GET /usages`` response will group results based on the consumer type and will include a new ``consumer_count`` key per type irrespective of whether the ``consumer_type`` was specified in the request. If an ``all`` ``consumer_type`` key is provided, all results are grouped under one key, ``all``. Older allocations which were not created with a consumer type are considered to have an ``unknown`` ``consumer_type``. If an ``unknown`` ``consumer_type`` key is provided, all results are grouped under one key, ``unknown``. The corresponding changes to ``POST /reshaper`` are included. ././@PaxHeader0000000000000000000000000000021100000000000011447 xustar0000000000000000115 path=openstack_placement-13.0.0/releasenotes/notes/create-allocation-empty-mapping-field-f5f97de6df891362.yaml 22 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/create-allocation-empty-mapping-field-f5f97de6df8913620000664000175000017500000000071500000000000033104 0ustar00zuulzuul00000000000000--- fixes: - | Since microversion 1.34, it has been possible to provide a ``mappings`` field when creating new allocations via the ``POST /allocations`` or ``PUT /allocations/{allocation_id}`` APIs. This field should be a dictionary associating request group suffixes with a list of UUIDs identifying the resource providers that satisfied each group. Due to a typo, this was allowing an empty object (``{}``). This is now corrected. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/db-auto-sync-e418f3f181958c7c.yaml0000664000175000017500000000064500000000000027027 0ustar00zuulzuul00000000000000--- features: - | A configuration setting ``[placement_database]/sync_on_startup`` is added which, if set to ``True``, will cause database schema migrations to be called when the placement web application is started. This avoids the need to call ``placement-manage db sync`` separately. To preserve backwards compatibility and avoid unexpected changes, the default of the setting is ``False``. ././@PaxHeader0000000000000000000000000000021000000000000011446 xustar0000000000000000114 path=openstack_placement-13.0.0/releasenotes/notes/deprecate-json-formatted-policy-file-dbec7a29325316de.yaml 22 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/deprecate-json-formatted-policy-file-dbec7a29325316de.0000664000175000017500000000176000000000000033056 0ustar00zuulzuul00000000000000--- upgrade: - | The default value of the ``[oslo_policy] policy_file`` config option has been changed from ``policy.json`` to ``policy.yaml``. Operators who are utilizing customized or previously generated static policy JSON files (which are not needed by default) should generate new policy files or convert them to YAML format.
Use the `oslopolicy-convert-json-to-yaml `_ tool to convert a JSON to YAML formatted policy file in a backward compatible way. deprecations: - | Use of JSON policy files was deprecated by the ``oslo.policy`` library during the Victoria development cycle. As a result, this deprecation is being noted in the Wallaby cycle with an anticipated future removal of support by ``oslo.policy``. As such, operators will need to convert to YAML policy files. Please see the upgrade notes for details on migration of any custom policy files. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/deprecate-placement-policy-file-1777dc2e92d8363c.yaml0000664000175000017500000000103100000000000032620 0ustar00zuulzuul00000000000000--- deprecations: - | The ``[placement]/policy_file`` configuration option is deprecated and its usage is being replaced with the more standard ``[oslo_policy]/policy_file`` option. If you do not override policy with custom rules, you will have nothing to do. If you do override the placement default policy, then you will need to update your configuration to use the ``[oslo_policy]/policy_file`` option. By default, the ``[oslo_policy]/policy_file`` option will be used if the file it points at exists. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/drop-python-2-aabea7dcdeca7ebf.yaml0000664000175000017500000000021000000000000027763 0ustar00zuulzuul00000000000000--- upgrade: - | Python 2.7 support has been dropped. The minimum version of Python now supported by placement is Python 3.6. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/drop-python-3-6-and-3-7-9db9b12a73106e26.yaml0000664000175000017500000000020100000000000030318 0ustar00zuulzuul00000000000000--- upgrade: - | Python 3.6 & 3.7 support has been dropped. The minimum version of Python now supported is Python 3.8. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/drop-python-3-6-and-3-7-c3d8c440800ed885.yaml0000664000175000017500000000020100000000000030325 0ustar00zuulzuul00000000000000--- upgrade: - | Python 3.6 & 3.7 support has been dropped. The minimum version of Python now supported is Python 3.8. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/drop-python-3-8-4636cf15992db5e7.yaml0000664000175000017500000000016600000000000027307 0ustar00zuulzuul00000000000000--- upgrade: - | Python 3.8 support was dropped. The minimum version of Python now supported is Python 3.9. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/fix-osprofiler-support-78b34a92c32fd30f.yaml0000664000175000017500000000027400000000000031243 0ustar00zuulzuul00000000000000--- fixes: - | By fixing bug `story/2005842`_, OSProfiler support works again in the placement WSGI. ..
_story/2005842: https://storyboard.openstack.org/#!/story/2005842 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/granular-request-suffix-a7fd857eadc16b56.yaml0000664000175000017500000000124500000000000031533 0ustar00zuulzuul00000000000000--- features: - | In microversion 1.33, the syntax for granular groupings of resource, required/forbidden trait, and aggregate association requests introduced in `1.25`_ has been extended to allow, in addition to numbers, strings from 1 to 64 characters in length consisting of a-z, A-Z, 0-9, ``_``, and ``-``. This is done to allow naming conventions (e.g., ``resources_COMPUTE`` and ``resources_NETWORK``) to emerge in situations where multiple services are collaborating to make requests. .. _1.25: https://docs.openstack.org/placement/latest/placement-api-microversion-history.html#granular-resource-requests-to-get-allocation-candidates ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/http_proxy_to_wsgi-6c8392d7eaed7c8d.yaml0000664000175000017500000000035100000000000030710 0ustar00zuulzuul00000000000000--- features: - | The ``HTTPProxyToWSGI`` middleware is now enabled in the API pipeline. With this middleware enabled, actual client addresses are recorded in request logs instead of the addresses of intermediate load balancers. ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=openstack_placement-13.0.0/releasenotes/notes/limit-nested-allocation-candidates-0886e569d15ad951.yaml 22 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/limit-nested-allocation-candidates-0886e569d15ad951.ya0000664000175000017500000000077200000000000032727 0ustar00zuulzuul00000000000000--- fixes: - | Limiting nested resource providers with the ``limit=N`` query parameter when calling ``GET /allocation_candidates`` could result in incomplete provider summaries. This is now fixed so that all resource providers that are in the same trees as any provider mentioned in the limited allocation requests are shown in the provider summaries collection. For more information see `story/2005859`_. .. _story/2005859: https://storyboard.openstack.org/#!/story/2005859 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/negative-aggregate-membership-1dde3cbe27c69279.yaml0000664000175000017500000000203600000000000032527 0ustar00zuulzuul00000000000000--- features: - | Add support for forbidden aggregates in the ``member_of`` queryparam in ``GET /resource_providers`` and ``GET /allocation_candidates``. Forbidden aggregates are prefixed with a ``!`` from microversion ``1.32``. This negative expression can also be used in multiple ``member_of`` parameters:: ?member_of=in:<agg1>,<agg2>&member_of=<agg3>&member_of=!<agg4> would translate logically to "Candidate resource providers must be at least one of agg1 or agg2, definitely in agg3 and definitely *not* in agg4." We do NOT support ``!`` within the ``in:`` list:: ?member_of=in:<agg1>,<agg2>,!<agg3> but we support the ``!in:`` prefix:: ?member_of=!in:<agg1>,<agg2>,<agg3> which is equivalent to:: ?member_of=!<agg1>&member_of=!<agg2>&member_of=!<agg3> where returned resource providers must not be in agg1, agg2, or agg3. Specifying forbidden aggregates in granular requests, ``member_of<N>``, is also supported from the same microversion, ``1.32``.
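As a sketch of the granular form mentioned at the end of the note above (the aggregate placeholder and the resource amount are illustrative only, not taken from the original note), forbidding an aggregate for just the first numbered request group might look like::

    ?resources1=VCPU:1&member_of1=!<agg1>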
././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/placement-status-upgrade-check-3aa412fd6cb1e4bc.yaml0000664000175000017500000000026600000000000032754 0ustar00zuulzuul00000000000000--- upgrade: - | A ``placement-status upgrade check`` command is added which can be used to check the readiness of a placement deployment before initiating an upgrade. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/policy-defaults-refresh-d903d15cd51ac1a8.yaml0000664000175000017500000000303300000000000031363 0ustar00zuulzuul00000000000000--- features: - | The Placement policies have been modified to drop the system scope. Every API policy is now scoped to the project. This means that system scoped users will get a 403 permission denied error. Currently, Placement supports the following default roles: * ``admin`` (Legacy admin) * ``service`` * ``project reader`` (for project resource usage) For the details on what changed from the existing policy, please refer to the `RBAC new guidelines`_. We have implemented phase-1 and phase-2 of the `RBAC new guidelines`_. Currently, scope checks and new defaults are disabled by default. You can enable them by setting the below config options in the ``placement.conf`` file:: [oslo_policy] enforce_new_defaults=True enforce_scope=True upgrade: - | All the placement policies have dropped the system scope and they are now project scoped only. The scope of policy is not overridable in policy.yaml. If you have enabled scope enforcement and are using a system scope token to access placement APIs, you need to switch to a project scope token. Enforce scope is not enabled by default, but it will be enabled by default in a future release. The old defaults are deprecated but still enforced by default; they will be removed in a future release. The ``placement:reshaper:reshape`` policy default has been changed to the ``service`` role only. .. _`RBAC new guidelines`: https://governance.openstack.org/tc/goals/selected/consistent-and-secure-rbac.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/rbac-policy-support-94f84c29da81c331.yaml0000664000175000017500000000524000000000000030423 0ustar00zuulzuul00000000000000--- features: - | The default policies provided by placement have been updated to add support for read-only roles. This is part of a broader community effort to support read-only roles and implement secure, consistent default policies. Refer to `the Keystone documentation`__ for more information on the reason for these changes. Previously, all policies defaulted to ``rule:admin_api``, which mapped to ``role:admin``.
The following rules now default to ``role:admin and system_scope:all`` instead: - ``placement:allocation_candidates:list`` - ``placement:allocations:delete`` - ``placement:allocations:list`` - ``placement:allocations:manage`` - ``placement:allocations:update`` - ``placement:reshaper:reshape`` - ``placement:resource_classes:list`` - ``placement:resource_classes:create`` - ``placement:resource_classes:show`` - ``placement:resource_classes:update`` - ``placement:resource_classes:delete`` - ``placement:resource_providers:create`` - ``placement:resource_providers:delete`` - ``placement:resource_providers:list`` - ``placement:resource_providers:show`` - ``placement:resource_providers:update`` - ``placement:resource_providers:aggregates:list`` - ``placement:resource_providers:aggregates:update`` - ``placement:resource_providers:allocations:list`` - ``placement:resource_providers:inventories:create`` - ``placement:resource_providers:inventories:delete`` - ``placement:resource_providers:inventories:list`` - ``placement:resource_providers:inventories:show`` - ``placement:resource_providers:inventories:update`` - ``placement:resource_providers:traits:delete`` - ``placement:resource_providers:traits:list`` - ``placement:resource_providers:traits:update`` - ``placement:resource_providers:usages`` - ``placement:traits:list`` - ``placement:traits:show`` - ``placement:traits:update`` - ``placement:traits:delete`` The following rule now defaults to ``(role:reader and system_scope:all) or role:reader and project_id:%(project_id)s`` instead: - ``placement:usages`` More information on these policy defaults can be found in the `documentation`__. __ https://docs.openstack.org/keystone/latest/admin/service-api-protection.html __ https://docs.openstack.org/placement/latest/configuration/policy.html - | The default policy used for the ``/usages`` API, ``placement:usages``, has been updated to allow project users to view information about resource usage for their project, specified using the ``project_id`` query string parameter. Previously this API was restricted to admins. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/re-parenting-providers-94dcedff45b35bf7.yaml0000664000175000017500000000026300000000000031421 0ustar00zuulzuul00000000000000--- features: - | With the new microversion ``1.37`` placement now supports re-parenting and un-parenting resource providers via ``PUT /resource_providers/{uuid}`` API. ././@PaxHeader0000000000000000000000000000020600000000000011453 xustar0000000000000000112 path=openstack_placement-13.0.0/releasenotes/notes/remove-deprecated-placement-policy-cba1414ca626302d.yaml 22 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/remove-deprecated-placement-policy-cba1414ca626302d.ya0000664000175000017500000000030100000000000033031 0ustar00zuulzuul00000000000000--- upgrade: - | The deprecated ``placement`` policy has now been removed. This policy was used prior to the introduction of granular policies in the nova 18.0.0 (Rocky) release. 
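As a sketch of the re-parenting feature noted above (microversion ``1.37``), a request that moves a provider under a new parent might look like the following; the provider name and the UUID placeholders are illustrative, not taken from the original note::

    PUT /resource_providers/<child_rp_uuid>

    {
        "name": "child-rp",
        "parent_provider_uuid": "<new_parent_rp_uuid>"
    }

Un-parenting would be requested by setting ``parent_provider_uuid`` to ``null`` in the same request body.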
././@PaxHeader0000000000000000000000000000020700000000000011454 xustar0000000000000000113 path=openstack_placement-13.0.0/releasenotes/notes/remove-placement-policy-file-config-bb9bb26332413a77.yaml 22 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/remove-placement-policy-file-config-bb9bb26332413a77.y0000664000175000017500000000063200000000000032716 0ustar00zuulzuul00000000000000--- upgrade: - | The deprecated ``[placement]/policy_file`` configuration option is removed. Use the more standard ``[oslo_policy]/policy_file`` config option. If you do not override policy with custom rules, you will have nothing to do. If you do override the placement default policy, then you will need to update your configuration to use the ``[oslo_policy]/policy_file`` config option. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/set_root_provider_id-53930a5d1dbd374f.yaml0000664000175000017500000000053400000000000031002 0ustar00zuulzuul00000000000000--- features: - | A new online data migration has been added to populate missing ``root_provider_id`` in the resource_providers table. This can be run during the normal ``placement-manage db online_data_migrations`` routine. See `Bug#1803925`_ for more details. .. _Bug#1803925: https://bugs.launchpad.net/nova/+bug/1803925 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/stein-prelude-779b0dbfe65cf9ac.yaml0000664000175000017500000000164300000000000027576 0ustar00zuulzuul00000000000000--- prelude: | The 1.0.0 release of Placement is the first release where the Placement code is hosted in its own repository_ and managed as its own OpenStack project. Because of this, the majority of changes are not user-facing. There are a small number of new features (including microversion 1.31_) and bug fixes, listed below. A new document, `Upgrading from Nova to Placement`_, has been created. It explains the steps required to upgrade to extracted Placement from Nova and to migrate data from the ``nova_api`` database to the ``placement_database``. .. _repository: https://opendev.org/openstack/placement .. _1.31: https://docs.openstack.org/placement/latest/placement-api-microversion-history.html#add-in-tree-queryparam-on-get-allocation-candidates-maximum-in-stein .. _Upgrading from Nova to Placement: https://docs.openstack.org/placement/latest/upgrade/to-stein.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/train-prelude-06739452ba2f66d9.yaml0000664000175000017500000000231300000000000027201 0ustar00zuulzuul00000000000000--- prelude: | The 2.0.0 release of placement is the first release where placement is available solely from its own project and must be installed separately from nova. If the extracted placement is not already in use, prior to upgrading to Train, the Stein version of placement must be installed. See `Upgrading from Nova to Placement`_ for details. 2.0.0 adds a suite of features which, combined, enable targeting candidate providers that have complex trees modeling NUMA layouts, multiple devices, and networks where affinity between and grouping among the members of the tree are required. These features will help to enable NFV and other high performance workloads in the cloud.
Also added is support for forbidden aggregates which allows groups of resource providers to only be used for specific purposes, such as reserving a group of compute nodes for licensed workloads. Extensive benchmarking and profiling have led to massive performance enhancements, especially in environments with large numbers of resource providers and high concurrency. .. _Upgrading from Nova to Placement: https://docs.openstack.org/placement/latest/upgrade/to-stein.html ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/train-require-root-provider-ids-60bc374ac354f41e.yaml0000664000175000017500000000054000000000000032724 0ustar00zuulzuul00000000000000--- upgrade: - | The ``Missing Root Provider IDs`` upgrade check in the ``placement-status upgrade check`` command will now produce a failure if it detects any ``resource_providers`` records with a null ``root_provider_id`` value. Run the ``placement-manage db online_data_migrations`` command to heal these types of records. ././@PaxHeader0000000000000000000000000000021500000000000011453 xustar0000000000000000119 path=openstack_placement-13.0.0/releasenotes/notes/upgrade-status-check-incomplete-consumers-3362d7db55dd8bdf.yaml 22 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/notes/upgrade-status-check-incomplete-consumers-3362d7db55dd0000664000175000017500000000034200000000000033412 0ustar00zuulzuul00000000000000--- upgrade: - | An upgrade check was added to the ``placement-status upgrade check`` command for incomplete consumers which can be remedied by running the ``placement-manage db online_data_migrations`` command. ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.292778 openstack_placement-13.0.0/releasenotes/source/0000775000175000017500000000000000000000000021626 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/source/2023.1.rst0000664000175000017500000000021000000000000023076 0ustar00zuulzuul00000000000000=========================== 2023.1 Series Release Notes =========================== .. release-notes:: :branch: unmaintained/2023.1 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/source/2023.2.rst0000664000175000017500000000020200000000000023100 0ustar00zuulzuul00000000000000=========================== 2023.2 Series Release Notes =========================== .. release-notes:: :branch: stable/2023.2 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/source/2024.1.rst0000664000175000017500000000020200000000000023100 0ustar00zuulzuul00000000000000=========================== 2024.1 Series Release Notes =========================== .. release-notes:: :branch: stable/2024.1 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/source/2024.2.rst0000664000175000017500000000020200000000000023101 0ustar00zuulzuul00000000000000=========================== 2024.2 Series Release Notes =========================== .. 
release-notes:: :branch: stable/2024.2 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/source/conf.py0000664000175000017500000000456400000000000023136 0ustar00zuulzuul00000000000000# -*- coding: utf-8 -*- # # Configuration file for the Sphinx documentation builder. # # This file does only contain a selection of the most common options. For a # full list see the documentation: # http://www.sphinx-doc.org/en/master/config # -- Path setup -------------------------------------------------------------- # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # # import os # import sys # sys.path.insert(0, os.path.abspath('.')) # -- Project information ----------------------------------------------------- # Keep these empty so that releasesnotes do not display an associated # version. # The short X.Y version version = '' # The full version, including alpha/beta/rc tags release = '' # -- General configuration --------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. # # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'openstackdocstheme', 'reno.sphinxext', ] # The master toctree document. master_doc = 'index' # General information about the project. project = 'Placement Release Notes' copyright = '2018, Placement developers' author = 'OpenStack' # openstackdocstheme options openstackdocs_repo_name = 'openstack/placement' openstackdocs_auto_name = False openstackdocs_use_storyboard = True # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # # This is also used if you do content translation via gettext catalogs. # Usually you set "language" from the command line for these cases. # language = None # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. # This pattern also affects html_static_path and html_extra_path . exclude_patterns = [] # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'native' # -- Options for HTML output ------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. # html_theme = 'openstackdocs' ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/source/index.rst0000664000175000017500000000072200000000000023470 0ustar00zuulzuul00000000000000 Placement Release Notes ======================= .. note:: The placement service was extracted from the nova service at the beginning of the Stein cycle. Release history prior to Stein can be found in the `Nova Release Notes `_. .. 
toctree:: :maxdepth: 1 unreleased 2024.2 2024.1 2023.2 2023.1 zed yoga xena wallaby victoria ussuri train stein ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/source/stein.rst0000664000175000017500000000022100000000000023475 0ustar00zuulzuul00000000000000=================================== Stein Series Release Notes =================================== .. release-notes:: :branch: stable/stein ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/source/train.rst0000664000175000017500000000017600000000000023501 0ustar00zuulzuul00000000000000========================== Train Series Release Notes ========================== .. release-notes:: :branch: stable/train ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/source/unreleased.rst0000664000175000017500000000016000000000000024504 0ustar00zuulzuul00000000000000============================== Current Series Release Notes ============================== .. release-notes:: ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/source/ussuri.rst0000664000175000017500000000020200000000000023704 0ustar00zuulzuul00000000000000=========================== Ussuri Series Release Notes =========================== .. release-notes:: :branch: stable/ussuri ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/source/victoria.rst0000664000175000017500000000022000000000000024172 0ustar00zuulzuul00000000000000============================= Victoria Series Release Notes ============================= .. release-notes:: :branch: unmaintained/victoria ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/source/wallaby.rst0000664000175000017500000000021400000000000024010 0ustar00zuulzuul00000000000000============================ Wallaby Series Release Notes ============================ .. release-notes:: :branch: unmaintained/wallaby ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/source/xena.rst0000664000175000017500000000020000000000000023303 0ustar00zuulzuul00000000000000========================= Xena Series Release Notes ========================= .. release-notes:: :branch: unmaintained/xena ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/source/yoga.rst0000664000175000017500000000020000000000000023307 0ustar00zuulzuul00000000000000========================= Yoga Series Release Notes ========================= .. release-notes:: :branch: unmaintained/yoga ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/releasenotes/source/zed.rst0000664000175000017500000000017400000000000023144 0ustar00zuulzuul00000000000000======================== Zed Series Release Notes ======================== .. 
release-notes:: :branch: unmaintained/zed ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/requirements.txt0000664000175000017500000000165000000000000021123 0ustar00zuulzuul00000000000000# Requirements lower bounds listed here are our best effort to keep them up to # date but we do not test them so no guarantee of having them all correct. If # you find any incorrect lower bounds, let us know or propose a fix. pbr>=3.1.1 # Apache-2.0 SQLAlchemy>=1.4.0 # MIT keystonemiddleware>=4.18.0 # Apache-2.0 Routes>=2.3.1 # MIT WebOb>=1.8.2 # MIT jsonschema>=3.2.0 # MIT requests>=2.25.0 # Apache-2.0 oslo.concurrency>=3.26.0 # Apache-2.0 oslo.config>=6.7.0 # Apache-2.0 oslo.context>=2.22.0 # Apache-2.0 oslo.log>=4.3.0 # Apache-2.0 oslo.serialization>=2.25.0 # Apache-2.0 oslo.utils>=4.5.0 # Apache-2.0 oslo.db>=8.6.0 # Apache-2.0 oslo.policy>=4.4.0 # Apache-2.0 oslo.middleware>=3.31.0 # Apache-2.0 oslo.upgradecheck>=1.3.0 # Apache-2.0 # NOTE(efried): Sync lower-constraints.txt for os-traits & os-resource-classes. os-resource-classes>=1.1.0 # Apache-2.0 os-traits>=3.3.0 # Apache-2.0 microversion-parse>=0.2.1 # Apache-2.0 ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.292778 openstack_placement-13.0.0/setup.cfg0000664000175000017500000000331400000000000017457 0ustar00zuulzuul00000000000000[metadata] name = openstack-placement summary = Resource provider inventory usage and allocation service description_file = README.rst author = OpenStack author_email = openstack-discuss@lists.openstack.org url = https://docs.openstack.org/placement/latest/ project_urls = Bug Tracker = https://bugs.launchpad.net/placement Documentation = https://docs.openstack.org/placement/latest/ API Reference = https://docs.openstack.org/api-ref/placement/ Source Code = https://opendev.org/openstack/placement Release Notes = https://docs.openstack.org/releasenotes/placement/ python_requires = >=3.9 classifier = Development Status :: 5 - Production/Stable Environment :: OpenStack Intended Audience :: Information Technology Intended Audience :: System Administrators License :: OSI Approved :: Apache Software License Operating System :: POSIX :: Linux Programming Language :: Python Programming Language :: Python :: Implementation :: CPython Programming Language :: Python :: 3 :: Only Programming Language :: Python :: 3 Programming Language :: Python :: 3.9 Programming Language :: Python :: 3.10 Programming Language :: Python :: 3.11 Programming Language :: Python :: 3.12 [files] packages = placement [entry_points] oslo.config.opts = placement.conf = placement.conf.opts:list_opts oslo.config.opts.defaults = nova.conf = placement.conf.base:set_lib_defaults oslo.policy.enforcer = placement = placement.policy:get_enforcer oslo.policy.policies = placement = placement.policies:list_rules console_scripts = placement-manage = placement.cmd.manage:main placement-status = placement.cmd.status:main wsgi_scripts = placement-api = placement.wsgi:init_application [egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/setup.py0000664000175000017500000000127100000000000017350 0ustar00zuulzuul00000000000000# Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import setuptools setuptools.setup( setup_requires=['pbr>=2.0.0'], pbr=True) ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/test-requirements.txt0000664000175000017500000000135300000000000022100 0ustar00zuulzuul00000000000000hacking>=6.1.0,<6.2.0 # Apache-2.0 coverage>=4.4.1 # Apache-2.0 fixtures>=3.0.0 # Apache-2.0/BSD # NOTE(tetsuro): Local testing on osx may have problems to install packages, # psycopg2 and PYMySQL. You can workaround them using sys_platform qualifier. # See the https://review.opendev.org/#/c/671249/ for details. However, we # don't use it here to keep the consistency with global requirements. psycopg2>=2.8 # LGPL/ZPL PyMySQL>=0.8.0 # MIT License oslotest>=3.5.0 # Apache-2.0 stestr>=1.0.0 # Apache-2.0 testtools>=2.2.0 # MIT bandit>=1.1.0 # Apache-2.0 gabbi>=1.35.0 # Apache-2.0 # placement functional tests cryptography>=2.7 wsgi-intercept>=1.7.0 # MIT License # needed to generate osprofiler config options osprofiler>=1.4.0 # Apache-2.0 ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1743591511.292778 openstack_placement-13.0.0/tools/0000775000175000017500000000000000000000000016775 5ustar00zuulzuul00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/tools/flake8wrap.sh0000775000175000017500000000073300000000000021403 0ustar00zuulzuul00000000000000#!/bin/sh # # A simple wrapper around flake8 which makes it possible # to ask it to only verify files changed in the current # git HEAD patch. # # Intended to be invoked via tox: # # tox -epep8 -- -HEAD # if test "x$1" = "x-HEAD" ; then shift files=$(git diff --name-only HEAD~1 | tr '\n' ' ') echo "Running flake8 on ${files}" diff -u --from-file /dev/null ${files} | flake8 --diff "$@" else echo "Running flake8 on all files" exec flake8 "$@" fi ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/tools/test-setup.sh0000775000175000017500000000353400000000000021456 0ustar00zuulzuul00000000000000#!/bin/bash -xe # This script will be run by OpenStack CI before unit tests are run, # it sets up the test system as needed. # Developers should setup their test systems in a similar way. # This setup needs to be run as a user that can run sudo. # The root password for the MySQL database; pass it in via # MYSQL_ROOT_PW. DB_ROOT_PW=${MYSQL_ROOT_PW:-insecure_slave} # This user and its password are used by the tests, if you change it, # your tests might fail. DB_USER=openstack_citest DB_PW=openstack_citest sudo -H mysqladmin -u root password $DB_ROOT_PW # It's best practice to remove anonymous users from the database. If # an anonymous user exists, then it matches first for connections and # other connections from that host will not work. 
sudo -H mysql -u root -p$DB_ROOT_PW -h localhost -e " DELETE FROM mysql.user WHERE User=''; FLUSH PRIVILEGES; CREATE USER '$DB_USER'@'%' IDENTIFIED BY '$DB_PW'; GRANT ALL PRIVILEGES ON *.* TO '$DB_USER'@'%' WITH GRANT OPTION;" # Now create our database. mysql -u $DB_USER -p$DB_PW -h 127.0.0.1 -e " SET default_storage_engine=MYISAM; DROP DATABASE IF EXISTS openstack_citest; CREATE DATABASE openstack_citest CHARACTER SET utf8;" # Same for PostgreSQL # Setup user root_roles=$(sudo -H -u postgres psql -t -c " SELECT 'HERE' from pg_roles where rolname='$DB_USER'") if [[ ${root_roles} == *HERE ]];then sudo -H -u postgres psql -c "ALTER ROLE $DB_USER WITH SUPERUSER LOGIN PASSWORD '$DB_PW'" else sudo -H -u postgres psql -c "CREATE ROLE $DB_USER WITH SUPERUSER LOGIN PASSWORD '$DB_PW'" fi # Store password for tests cat << EOF > $HOME/.pgpass *:*:*:$DB_USER:$DB_PW EOF chmod 0600 $HOME/.pgpass # Now create our database psql -h 127.0.0.1 -U $DB_USER -d template1 -c "DROP DATABASE IF EXISTS openstack_citest" createdb -h 127.0.0.1 -U $DB_USER -l C -T template0 -E utf8 openstack_citest ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1743591465.0 openstack_placement-13.0.0/tox.ini0000664000175000017500000001027400000000000017154 0ustar00zuulzuul00000000000000[tox] minversion = 4.6.0 envlist = py3,functional,pep8 [testenv] usedevelop = True allowlist_externals = bash rm env install_command = python -I -m pip install -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master} {opts} {packages} setenv = VIRTUAL_ENV={envdir} LANGUAGE=en_US LC_ALL=en_US.utf-8 OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=160 PYTHONDONTWRITEBYTECODE=1 deps = -r{toxinidir}/test-requirements.txt # For a venv that doesn't use stestr commands must be overridden. commands = stestr run {posargs} passenv = OS_DEBUG GENERATE_HASHES # there is also secret magic in subunit-trace which lets you run in a fail only # mode. To do this define the TRACE_FAILONLY environmental variable. [testenv:functional{,-py39,-py310,-py311,-py312}] commands = stestr --test-path=./placement/tests/functional run {posargs} [testenv:pep8] description = Run style checks. skip_install = true deps = pre-commit commands = pre-commit run --all-files --show-diff-on-failure [testenv:fast8] description = Run style checks on the changes made since HEAD~. For a full run including docs, use 'pep8' commands = bash tools/flake8wrap.sh -HEAD [testenv:genconfig] commands = oslo-config-generator --config-file=etc/placement/config-generator.conf [testenv:genpolicy] commands = oslopolicy-sample-generator --config-file=etc/placement/policy-generator.conf [testenv:cover] # TODO(stephenfin): Remove the PYTHON hack below in favour of a [coverage] # section once we rely on coverage 4.3+ # # https://bitbucket.org/ned/coveragepy/issues/519/ setenv = {[testenv]setenv} PYTHON=coverage run --source placement --parallel-mode commands = coverage erase stestr --test-path=./placement/tests run {posargs} coverage combine coverage html -d cover coverage xml -o cover/coverage.xml coverage report [testenv:debug] commands = oslo_debug_helper {posargs} [testenv:venv] deps = -r{toxinidir}/requirements.txt -r{toxinidir}/test-requirements.txt -r{toxinidir}/doc/requirements.txt commands = {posargs} [testenv:docs] description = Build all documentation including API guides and refs. 
deps = -r{toxinidir}/doc/requirements.txt commands = rm -rf doc/build sphinx-build -W --keep-going -b html -j auto doc/source doc/build/html # Test the redirects whereto doc/build/html/.htaccess doc/test/redirect-tests.txt [testenv:pdf-docs] basepython = python3 deps = {[testenv:docs]deps} allowlist_externals = make commands = sphinx-build -W -b latex doc/source doc/build/pdf make -C doc/build/pdf [testenv:api-ref] description = Generate the API ref. Called from CI scripts to test and publish to docs.openstack.org. deps = {[testenv:docs]deps} commands = rm -rf api-ref/build sphinx-build -W -b html -d api-ref/build/doctrees api-ref/source api-ref/build/html [testenv:releasenotes] description = Generate release notes. deps = {[testenv:docs]deps} commands = rm -rf releasenotes/build sphinx-build -W -b html -d releasenotes/build/doctrees releasenotes/source releasenotes/build/html [testenv:bandit] # NOTE(browne): This is required for the integration test job of the bandit # project. Please do not remove. commands = bandit -r placement -x tests -n 5 -ll [flake8] enable-extensions = H106,H203,H904 # H405 is a good guideline, but sometimes multiline doc strings just don't have # a natural summary line. Rejecting code for this reason is wrong. # W504 skipped since you must choose either W503 or W504 (they conflict) ignore = H405, W504 exclude = .venv,.git,.tox,dist,*lib/python*,*egg,build,releasenotes # To get a list of functions that have a complexity of 19 or more, set # max-complexity to 19 and run 'tox -epep8'. # 19 is currently the most complex thing we have max-complexity=19 [testenv:bindep] # Do not install any requirements. We want this to be fast and work even if # system dependencies are missing, since it's used to tell you what system # dependencies are missing! This also means that bindep must be installed # separately, outside of the requirements files, and develop mode disabled # explicitly to avoid unnecessarily installing the checked-out repo too usedevelop = False skipsdist = True deps = bindep commands = bindep test
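As a usage sketch for the environments defined above (assuming a local tox 4.x installation, per the ``minversion`` setting), the common local workflows would be::

    # style checks only
    tox -e pep8

    # unit and functional tests
    tox -e py3,functional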